00:00:00.001 Started by upstream project "autotest-per-patch" build number 132308
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.141 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.142 The recommended git tool is: git
00:00:00.142 using credential 00000000-0000-0000-0000-000000000002
00:00:00.144 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.172 Fetching changes from the remote Git repository
00:00:00.178 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.204 Using shallow fetch with depth 1
00:00:00.204 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.204 > git --version # timeout=10
00:00:00.241 > git --version # 'git version 2.39.2'
00:00:00.241 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.265 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.265 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.480 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.491 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.505 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:05.506 > git config core.sparsecheckout # timeout=10
00:00:05.517 > git read-tree -mu HEAD # timeout=10
00:00:05.533 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:05.551 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:05.552 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:05.655 [Pipeline] Start of Pipeline
00:00:05.669 [Pipeline] library
00:00:05.671 Loading library shm_lib@master
00:00:05.671 Library shm_lib@master is cached. Copying from home.
00:00:05.689 [Pipeline] node
00:00:05.700 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.702 [Pipeline] {
00:00:05.716 [Pipeline] catchError
00:00:05.717 [Pipeline] {
00:00:05.731 [Pipeline] wrap
00:00:05.741 [Pipeline] {
00:00:05.750 [Pipeline] stage
00:00:05.752 [Pipeline] { (Prologue)
00:00:05.974 [Pipeline] sh
00:00:06.263 + logger -p user.info -t JENKINS-CI
00:00:06.278 [Pipeline] echo
00:00:06.280 Node: WFP8
00:00:06.288 [Pipeline] sh
00:00:06.591 [Pipeline] setCustomBuildProperty
00:00:06.603 [Pipeline] echo
00:00:06.605 Cleanup processes
00:00:06.611 [Pipeline] sh
00:00:06.899 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.899 1185817 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.914 [Pipeline] sh
00:00:07.203 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.203 ++ grep -v 'sudo pgrep'
00:00:07.203 ++ awk '{print $1}'
00:00:07.203 + sudo kill -9
00:00:07.203 + true
00:00:07.220 [Pipeline] cleanWs
00:00:07.231 [WS-CLEANUP] Deleting project workspace...
00:00:07.231 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.237 [WS-CLEANUP] done
00:00:07.241 [Pipeline] setCustomBuildProperty
00:00:07.255 [Pipeline] sh
00:00:07.538 + sudo git config --global --replace-all safe.directory '*'
00:00:07.623 [Pipeline] httpRequest
00:00:08.122 [Pipeline] echo
00:00:08.124 Sorcerer 10.211.164.20 is alive
00:00:08.135 [Pipeline] retry
00:00:08.137 [Pipeline] {
00:00:08.151 [Pipeline] httpRequest
00:00:08.155 HttpMethod: GET
00:00:08.156 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.156 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.177 Response Code: HTTP/1.1 200 OK
00:00:08.178 Success: Status code 200 is in the accepted range: 200,404
00:00:08.178 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:35.019 [Pipeline] }
00:00:35.036 [Pipeline] // retry
00:00:35.046 [Pipeline] sh
00:00:35.332 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:35.348 [Pipeline] httpRequest
00:00:35.691 [Pipeline] echo
00:00:35.692 Sorcerer 10.211.164.20 is alive
00:00:35.700 [Pipeline] retry
00:00:35.702 [Pipeline] {
00:00:35.715 [Pipeline] httpRequest
00:00:35.720 HttpMethod: GET
00:00:35.720 URL: http://10.211.164.20/packages/spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz
00:00:35.722 Sending request to url: http://10.211.164.20/packages/spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz
00:00:35.725 Response Code: HTTP/1.1 200 OK
00:00:35.725 Success: Status code 200 is in the accepted range: 200,404
00:00:35.725 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz
00:00:56.798 [Pipeline] }
00:00:56.815 [Pipeline] // retry
00:00:56.824 [Pipeline] sh
00:00:57.111 + tar --no-same-owner -xf spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz
00:00:59.664 [Pipeline] sh
00:00:59.950 + git -C spdk log --oneline -n5
00:00:59.950 ca87521f7 test/nvme/interrupt: Verify pre|post IO cpu load
00:00:59.950 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:00:59.950 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:00:59.950 4bcab9fb9 correct kick for CQ full case
00:00:59.950 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:00:59.962 [Pipeline] }
00:00:59.978 [Pipeline] // stage
00:00:59.989 [Pipeline] stage
00:00:59.991 [Pipeline] { (Prepare)
00:01:00.011 [Pipeline] writeFile
00:01:00.028 [Pipeline] sh
00:01:00.314 + logger -p user.info -t JENKINS-CI
00:01:00.326 [Pipeline] sh
00:01:00.611 + logger -p user.info -t JENKINS-CI
00:01:00.624 [Pipeline] sh
00:01:00.909 + cat autorun-spdk.conf
00:01:00.910 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.910 SPDK_TEST_NVMF=1
00:01:00.910 SPDK_TEST_NVME_CLI=1
00:01:00.910 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:00.910 SPDK_TEST_NVMF_NICS=e810
00:01:00.910 SPDK_TEST_VFIOUSER=1
00:01:00.910 SPDK_RUN_UBSAN=1
00:01:00.910 NET_TYPE=phy
00:01:00.917 RUN_NIGHTLY=0
00:01:00.922 [Pipeline] readFile
00:01:00.949 [Pipeline] withEnv
00:01:00.951 [Pipeline] {
00:01:00.964 [Pipeline] sh
00:01:01.253 + set -ex
00:01:01.253 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:01.253 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:01.253 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.253 ++ SPDK_TEST_NVMF=1
00:01:01.253 ++ SPDK_TEST_NVME_CLI=1
00:01:01.253 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.253 ++ SPDK_TEST_NVMF_NICS=e810
00:01:01.253 ++ SPDK_TEST_VFIOUSER=1
00:01:01.253 ++ SPDK_RUN_UBSAN=1
00:01:01.253 ++ NET_TYPE=phy
00:01:01.253 ++ RUN_NIGHTLY=0
00:01:01.253 + case $SPDK_TEST_NVMF_NICS in
00:01:01.253 + DRIVERS=ice
00:01:01.253 + [[ tcp == \r\d\m\a ]]
00:01:01.253 + [[ -n ice ]]
00:01:01.253 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:01.253 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:04.548 rmmod: ERROR: Module irdma is not currently loaded
00:01:04.548 rmmod: ERROR: Module i40iw is not currently loaded
00:01:04.548 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:04.548 + true
00:01:04.548 + for D in $DRIVERS
00:01:04.548 + sudo modprobe ice
00:01:04.548 + exit 0
00:01:04.558 [Pipeline] }
00:01:04.573 [Pipeline] // withEnv
00:01:04.578 [Pipeline] }
00:01:04.594 [Pipeline] // stage
00:01:04.604 [Pipeline] catchError
00:01:04.606 [Pipeline] {
00:01:04.621 [Pipeline] timeout
00:01:04.622 Timeout set to expire in 1 hr 0 min
00:01:04.624 [Pipeline] {
00:01:04.638 [Pipeline] stage
00:01:04.640 [Pipeline] { (Tests)
00:01:04.654 [Pipeline] sh
00:01:04.942 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.942 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.942 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.942 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:04.942 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:04.942 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:04.942 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:04.942 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:04.942 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:04.942 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:04.942 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:04.942 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.942 + source /etc/os-release
00:01:04.942 ++ NAME='Fedora Linux'
00:01:04.942 ++ VERSION='39 (Cloud Edition)'
00:01:04.942 ++ ID=fedora
00:01:04.942 ++ VERSION_ID=39
00:01:04.942 ++ VERSION_CODENAME=
00:01:04.942 ++ PLATFORM_ID=platform:f39
00:01:04.942 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:04.942 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:04.942 ++ LOGO=fedora-logo-icon
00:01:04.942 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:04.942 ++ HOME_URL=https://fedoraproject.org/
00:01:04.942 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:04.942 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:04.942 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:04.942 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:04.942 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:04.942 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:04.943 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:04.943 ++ SUPPORT_END=2024-11-12
00:01:04.943 ++ VARIANT='Cloud Edition'
00:01:04.943 ++ VARIANT_ID=cloud
00:01:04.943 + uname -a
00:01:04.943 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:04.943 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:07.485 Hugepages
00:01:07.485 node hugesize free / total
00:01:07.485 node0 1048576kB 0 / 0
00:01:07.485 node0 2048kB 1024 / 1024
00:01:07.485 node1 1048576kB 0 / 0
00:01:07.485 node1 2048kB 1024 / 1024
00:01:07.485
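The hugepage counts that setup.sh prints above come straight from per-NUMA-node sysfs counters. A minimal standalone sketch that reads the same numbers follows (standard Linux sysfs paths; this loop is illustrative and is not part of the job's scripts):

    #!/usr/bin/env bash
    # Report free/total hugepages per NUMA node, mirroring the "Hugepages" block above.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}          # e.g. "2048kB" or "1048576kB"
            free=$(<"$hp/free_hugepages")    # pages currently unused on this node
            total=$(<"$hp/nr_hugepages")     # pages reserved on this node
            printf '%s %s %s / %s\n' "${node##*/}" "$size" "$free" "$total"
        done
    done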
00:01:07.485 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:07.485 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:07.485 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:07.485 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:07.485 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:07.485 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:07.485 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:07.485 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:07.485 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:07.485 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:07.485 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:07.485 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:07.485 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:07.485 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:07.485 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:07.485 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:07.485 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:07.485 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:07.485 + rm -f /tmp/spdk-ld-path
00:01:07.485 + source autorun-spdk.conf
00:01:07.485 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.485 ++ SPDK_TEST_NVMF=1
00:01:07.485 ++ SPDK_TEST_NVME_CLI=1
00:01:07.485 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:07.485 ++ SPDK_TEST_NVMF_NICS=e810
00:01:07.485 ++ SPDK_TEST_VFIOUSER=1
00:01:07.485 ++ SPDK_RUN_UBSAN=1
00:01:07.485 ++ NET_TYPE=phy
00:01:07.485 ++ RUN_NIGHTLY=0
00:01:07.485 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:07.485 + [[ -n '' ]]
00:01:07.485 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:07.485 + for M in /var/spdk/build-*-manifest.txt
00:01:07.485 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:07.485 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.485 + for M in /var/spdk/build-*-manifest.txt
00:01:07.485 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:07.485 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.485 + for M in /var/spdk/build-*-manifest.txt
00:01:07.485 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:07.485 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.485 ++ uname
00:01:07.485 + [[ Linux == \L\i\n\u\x ]]
00:01:07.485 + sudo dmesg -T
00:01:07.745 + sudo dmesg --clear
00:01:07.745 + dmesg_pid=1186748
00:01:07.745 + [[ Fedora Linux == FreeBSD ]]
00:01:07.745 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:07.745 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:07.745 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:07.745 + [[ -x /usr/src/fio-static/fio ]]
00:01:07.745 + export FIO_BIN=/usr/src/fio-static/fio
00:01:07.745 + FIO_BIN=/usr/src/fio-static/fio
00:01:07.745 + sudo dmesg -Tw
00:01:07.745 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:07.745 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:07.745 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:07.745 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:07.745 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:07.745 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:07.745 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:07.745 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:07.745 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
14:10:56 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
14:10:56 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
14:10:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
14:10:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
14:10:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
14:10:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
14:10:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
14:10:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
14:10:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
14:10:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
14:10:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
14:10:56 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
14:10:56 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
14:10:56 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
14:10:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
14:10:56 -- scripts/common.sh@15 -- $ shopt -s extglob
14:10:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
14:10:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
14:10:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
14:10:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:10:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:10:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:10:56 -- paths/export.sh@5 -- $ export PATH
14:10:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:10:56 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
14:10:56 -- common/autobuild_common.sh@486 -- $ date +%s
14:10:56 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731849056.XXXXXX
14:10:56 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731849056.Yutr2T
14:10:56 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
14:10:56 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
14:10:56 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
14:10:56 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
14:10:56 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
14:10:56 -- common/autobuild_common.sh@502 -- $ get_config_params
14:10:56 -- common/autotest_common.sh@409 -- $ xtrace_disable
14:10:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:07.745 14:10:56 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
14:10:56 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
14:10:56 -- pm/common@17 -- $ local monitor
14:10:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:10:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:08.005 14:10:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:10:56 -- pm/common@21 -- $ date +%s
14:10:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:10:56 -- pm/common@21 -- $ date +%s
14:10:56 -- pm/common@25 -- $ sleep 1
14:10:56 -- pm/common@21 -- $ date +%s
14:10:56 -- pm/common@21 -- $ date +%s
14:10:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731849056
00:01:08.005 14:10:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731849056
00:01:08.005 14:10:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731849056
00:01:08.005 14:10:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731849056
00:01:08.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731849056_collect-cpu-load.pm.log
00:01:08.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731849056_collect-vmstat.pm.log
00:01:08.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731849056_collect-cpu-temp.pm.log
00:01:08.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731849056_collect-bmc-pm.bmc.pm.log
00:01:08.944 14:10:57 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:08.944 14:10:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:08.944 14:10:57 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:08.944 14:10:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:08.944 14:10:57 -- spdk/autobuild.sh@16 -- $ date -u
00:01:08.944 Sun Nov 17 01:10:57 PM UTC 2024
00:01:08.944 14:10:57 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:08.944 v25.01-pre-190-gca87521f7
00:01:08.944 14:10:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:08.944 14:10:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:08.944 14:10:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:08.944 14:10:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:08.944 14:10:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:08.944 14:10:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.944 ************************************
00:01:08.944 START TEST ubsan
00:01:08.944 ************************************
00:01:08.944 14:10:58 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:08.944 using ubsan
00:01:08.944
00:01:08.944 real 0m0.000s
00:01:08.944 user 0m0.000s
00:01:08.944 sys 0m0.000s
00:01:08.944 14:10:58 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:08.944 14:10:58 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:08.944 ************************************
00:01:08.944 END TEST ubsan
00:01:08.944 ************************************
00:01:08.944 14:10:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:08.944 14:10:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:08.944 14:10:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:08.944 14:10:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:08.944 14:10:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:08.944 14:10:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:08.944 14:10:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:08.944 14:10:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:08.944 14:10:58 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:09.204 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:09.204 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:09.462 Using 'verbs' RDMA provider
00:01:22.622 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:34.843 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:34.843 Creating mk/config.mk...done.
00:01:34.843 Creating mk/cc.flags.mk...done.
00:01:34.843 Type 'make' to build.
00:01:34.843 14:11:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:01:34.843 14:11:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:34.843 14:11:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:34.843 14:11:23 -- common/autotest_common.sh@10 -- $ set +x
00:01:34.843 ************************************
00:01:34.843 START TEST make
00:01:34.843 ************************************
00:01:34.843 14:11:23 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:35.102 make[1]: Nothing to be done for 'all'.
00:01:36.499 The Meson build system
00:01:36.499 Version: 1.5.0
00:01:36.499 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:36.499 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:36.499 Build type: native build
00:01:36.499 Project name: libvfio-user
00:01:36.499 Project version: 0.0.1
00:01:36.499 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:36.499 C linker for the host machine: cc ld.bfd 2.40-14
00:01:36.499 Host machine cpu family: x86_64
00:01:36.499 Host machine cpu: x86_64
00:01:36.499 Run-time dependency threads found: YES
00:01:36.499 Library dl found: YES
00:01:36.499 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:36.499 Run-time dependency json-c found: YES 0.17
00:01:36.499 Run-time dependency cmocka found: YES 1.1.7
00:01:36.499 Program pytest-3 found: NO
00:01:36.499 Program flake8 found: NO
00:01:36.499 Program misspell-fixer found: NO
00:01:36.499 Program restructuredtext-lint found: NO
00:01:36.499 Program valgrind found: YES (/usr/bin/valgrind)
00:01:36.499 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:36.499 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:36.499 Compiler for C supports arguments -Wwrite-strings: YES
00:01:36.499 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:36.499 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:36.499 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:36.499 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:36.499 Build targets in project: 8
00:01:36.499 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:36.499 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:36.499
00:01:36.499 libvfio-user 0.0.1
00:01:36.499
00:01:36.499 User defined options
00:01:36.499 buildtype : debug
00:01:36.499 default_library: shared
00:01:36.499 libdir : /usr/local/lib
00:01:36.499
00:01:36.499 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:36.757 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:36.757 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:36.757 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:36.757 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:36.757 [4/37] Compiling C object samples/null.p/null.c.o
00:01:36.757 [5/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:36.757 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:36.757 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:36.757 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:36.757 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:36.757 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:36.757 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:36.757 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:36.757 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:36.757 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:37.016 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:37.016 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:37.016 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:37.016 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:37.016 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:37.016 [20/37] Compiling C object samples/server.p/server.c.o
00:01:37.016 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:37.016 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:37.016 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:37.016 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:37.016 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:37.016 [26/37] Compiling C object samples/client.p/client.c.o
00:01:37.016 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:37.016 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:37.016 [29/37] Linking target samples/client
00:01:37.016 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:37.016 [31/37] Linking target test/unit_tests
00:01:37.274 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:37.274 [33/37] Linking target samples/server
00:01:37.274 [34/37] Linking target samples/shadow_ioeventfd_server
00:01:37.274 [35/37] Linking target samples/gpio-pci-idio-16
00:01:37.274 [36/37] Linking target samples/null
00:01:37.274 [37/37] Linking target samples/lspci
00:01:37.275 INFO: autodetecting backend as ninja
00:01:37.275 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
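The libvfio-user build above is meson's standard out-of-tree flow: configure a dedicated build directory, compile it with ninja, then stage the install through DESTDIR (the very next command in the log). A condensed sketch of that flow, with placeholder paths rather than a verbatim replay of the job:

    # SRC/BUILD and the stage directory are placeholders, not paths taken from this job.
    SRC=$PWD/libvfio-user
    BUILD=$SRC/build-debug
    meson setup "$BUILD" "$SRC" --buildtype=debug -Ddefault_library=shared
    ninja -C "$BUILD"                                      # compile + link, steps [1/37]..[37/37] above
    DESTDIR=/tmp/stage meson install --quiet -C "$BUILD"   # staged install, as in the next line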
00:01:37.275 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:37.534 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:37.534 ninja: no work to do.
00:01:42.811 The Meson build system
00:01:42.811 Version: 1.5.0
00:01:42.811 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:42.811 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:42.811 Build type: native build
00:01:42.811 Program cat found: YES (/usr/bin/cat)
00:01:42.811 Project name: DPDK
00:01:42.811 Project version: 24.03.0
00:01:42.811 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:42.811 C linker for the host machine: cc ld.bfd 2.40-14
00:01:42.811 Host machine cpu family: x86_64
00:01:42.811 Host machine cpu: x86_64
00:01:42.811 Message: ## Building in Developer Mode ##
00:01:42.811 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:42.811 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:42.811 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:42.811 Program python3 found: YES (/usr/bin/python3)
00:01:42.811 Program cat found: YES (/usr/bin/cat)
00:01:42.811 Compiler for C supports arguments -march=native: YES
00:01:42.811 Checking for size of "void *" : 8
00:01:42.811 Checking for size of "void *" : 8 (cached)
00:01:42.811 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:42.811 Library m found: YES
00:01:42.811 Library numa found: YES
00:01:42.811 Has header "numaif.h" : YES
00:01:42.811 Library fdt found: NO
00:01:42.811 Library execinfo found: NO
00:01:42.811 Has header "execinfo.h" : YES
00:01:42.811 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:42.811 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:42.811 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:42.811 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:42.811 Run-time dependency openssl found: YES 3.1.1
00:01:42.811 Run-time dependency libpcap found: YES 1.10.4
00:01:42.811 Has header "pcap.h" with dependency libpcap: YES
00:01:42.811 Compiler for C supports arguments -Wcast-qual: YES
00:01:42.811 Compiler for C supports arguments -Wdeprecated: YES
00:01:42.811 Compiler for C supports arguments -Wformat: YES
00:01:42.811 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:42.811 Compiler for C supports arguments -Wformat-security: NO
00:01:42.811 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:42.811 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:42.811 Compiler for C supports arguments -Wnested-externs: YES
00:01:42.811 Compiler for C supports arguments -Wold-style-definition: YES
00:01:42.811 Compiler for C supports arguments -Wpointer-arith: YES
00:01:42.811 Compiler for C supports arguments -Wsign-compare: YES
00:01:42.811 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:42.811 Compiler for C supports arguments -Wundef: YES
00:01:42.811 Compiler for C supports arguments -Wwrite-strings: YES
00:01:42.811 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:42.811 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:42.811 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:42.811 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:42.811 Program objdump found: YES (/usr/bin/objdump)
00:01:42.811 Compiler for C supports arguments -mavx512f: YES
00:01:42.811 Checking if "AVX512 checking" compiles: YES
00:01:42.811 Fetching value of define "__SSE4_2__" : 1
00:01:42.811 Fetching value of define "__AES__" : 1
00:01:42.811 Fetching value of define "__AVX__" : 1
00:01:42.811 Fetching value of define "__AVX2__" : 1
00:01:42.811 Fetching value of define "__AVX512BW__" : 1
00:01:42.811 Fetching value of define "__AVX512CD__" : 1
00:01:42.811 Fetching value of define "__AVX512DQ__" : 1
00:01:42.811 Fetching value of define "__AVX512F__" : 1
00:01:42.811 Fetching value of define "__AVX512VL__" : 1
00:01:42.811 Fetching value of define "__PCLMUL__" : 1
00:01:42.811 Fetching value of define "__RDRND__" : 1
00:01:42.811 Fetching value of define "__RDSEED__" : 1
00:01:42.811 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:42.811 Fetching value of define "__znver1__" : (undefined)
00:01:42.811 Fetching value of define "__znver2__" : (undefined)
00:01:42.811 Fetching value of define "__znver3__" : (undefined)
00:01:42.811 Fetching value of define "__znver4__" : (undefined)
00:01:42.811 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:42.811 Message: lib/log: Defining dependency "log"
00:01:42.811 Message: lib/kvargs: Defining dependency "kvargs"
00:01:42.811 Message: lib/telemetry: Defining dependency "telemetry"
00:01:42.811 Checking for function "getentropy" : NO
00:01:42.811 Message: lib/eal: Defining dependency "eal"
00:01:42.811 Message: lib/ring: Defining dependency "ring"
00:01:42.811 Message: lib/rcu: Defining dependency "rcu"
00:01:42.811 Message: lib/mempool: Defining dependency "mempool"
00:01:42.811 Message: lib/mbuf: Defining dependency "mbuf"
00:01:42.811 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:42.811 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:42.811 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:42.811 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:42.811 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:42.811 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:42.811 Compiler for C supports arguments -mpclmul: YES
00:01:42.811 Compiler for C supports arguments -maes: YES
00:01:42.811 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:42.811 Compiler for C supports arguments -mavx512bw: YES
00:01:42.811 Compiler for C supports arguments -mavx512dq: YES
00:01:42.811 Compiler for C supports arguments -mavx512vl: YES
00:01:42.811 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:42.811 Compiler for C supports arguments -mavx2: YES
00:01:42.811 Compiler for C supports arguments -mavx: YES
00:01:42.811 Message: lib/net: Defining dependency "net"
00:01:42.811 Message: lib/meter: Defining dependency "meter"
00:01:42.811 Message: lib/ethdev: Defining dependency "ethdev"
00:01:42.811 Message: lib/pci: Defining dependency "pci"
00:01:42.811 Message: lib/cmdline: Defining dependency "cmdline"
00:01:42.811 Message: lib/hash: Defining dependency "hash"
00:01:42.811 Message: lib/timer: Defining dependency "timer"
00:01:42.811 Message: lib/compressdev: Defining dependency "compressdev"
00:01:42.811 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:42.811 Message: lib/dmadev: Defining dependency "dmadev"
00:01:42.811 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:42.811 Message: lib/power: Defining dependency "power"
00:01:42.811 Message: lib/reorder: Defining dependency "reorder"
00:01:42.811 Message: lib/security: Defining dependency "security"
00:01:42.811 Has header "linux/userfaultfd.h" : YES
00:01:42.811 Has header "linux/vduse.h" : YES
00:01:42.811 Message: lib/vhost: Defining dependency "vhost"
00:01:42.811 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:42.811 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:42.811 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:42.811 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:42.811 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:42.811 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:42.811 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:42.811 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:42.812 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:42.812 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:42.812 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:42.812 Configuring doxy-api-html.conf using configuration
00:01:42.812 Configuring doxy-api-man.conf using configuration
00:01:42.812 Program mandb found: YES (/usr/bin/mandb)
00:01:42.812 Program sphinx-build found: NO
00:01:42.812 Configuring rte_build_config.h using configuration
00:01:42.812 Message:
00:01:42.812 =================
00:01:42.812 Applications Enabled
00:01:42.812 =================
00:01:42.812
00:01:42.812 apps:
00:01:42.812
00:01:42.812
00:01:42.812 Message:
00:01:42.812 =================
00:01:42.812 Libraries Enabled
00:01:42.812 =================
00:01:42.812
00:01:42.812 libs:
00:01:42.812 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:42.812 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:42.812 cryptodev, dmadev, power, reorder, security, vhost,
00:01:42.812
00:01:42.812 Message:
00:01:42.812 ===============
00:01:42.812 Drivers Enabled
00:01:42.812 ===============
00:01:42.812
00:01:42.812 common:
00:01:42.812
00:01:42.812 bus:
00:01:42.812 pci, vdev,
00:01:42.812 mempool:
00:01:42.812 ring,
00:01:42.812 dma:
00:01:42.812
00:01:42.812 net:
00:01:42.812
00:01:42.812 crypto:
00:01:42.812
00:01:42.812 compress:
00:01:42.812
00:01:42.812 vdpa:
00:01:42.812
00:01:42.812
00:01:42.812 Message:
00:01:42.812 =================
00:01:42.812 Content Skipped
00:01:42.812 =================
00:01:42.812
00:01:42.812 apps:
00:01:42.812 dumpcap: explicitly disabled via build config
00:01:42.812 graph: explicitly disabled via build config
00:01:42.812 pdump: explicitly disabled via build config
00:01:42.812 proc-info: explicitly disabled via build config
00:01:42.812 test-acl: explicitly disabled via build config
00:01:42.812 test-bbdev: explicitly disabled via build config
00:01:42.812 test-cmdline: explicitly disabled via build config
00:01:42.812 test-compress-perf: explicitly disabled via build config
00:01:42.812 test-crypto-perf: explicitly disabled via build config
00:01:42.812 test-dma-perf: explicitly disabled via build config
00:01:42.812 test-eventdev: explicitly disabled via build config
00:01:42.812 test-fib: explicitly disabled via build config
00:01:42.812 test-flow-perf: explicitly disabled via build config
00:01:42.812 test-gpudev: explicitly disabled via build config
00:01:42.812 test-mldev: explicitly disabled via build config
00:01:42.812 test-pipeline: explicitly disabled via build config
00:01:42.812 test-pmd: explicitly disabled via build config
00:01:42.812 test-regex: explicitly disabled via build config
00:01:42.812 test-sad: explicitly disabled via build config
00:01:42.812 test-security-perf: explicitly disabled via build config
00:01:42.812
00:01:42.812 libs:
00:01:42.812 argparse: explicitly disabled via build config
00:01:42.812 metrics: explicitly disabled via build config
00:01:42.812 acl: explicitly disabled via build config
00:01:42.812 bbdev: explicitly disabled via build config
00:01:42.812 bitratestats: explicitly disabled via build config
00:01:42.812 bpf: explicitly disabled via build config
00:01:42.812 cfgfile: explicitly disabled via build config
00:01:42.812 distributor: explicitly disabled via build config
00:01:42.812 efd: explicitly disabled via build config
00:01:42.812 eventdev: explicitly disabled via build config
00:01:42.812 dispatcher: explicitly disabled via build config
00:01:42.812 gpudev: explicitly disabled via build config
00:01:42.812 gro: explicitly disabled via build config
00:01:42.812 gso: explicitly disabled via build config
00:01:42.812 ip_frag: explicitly disabled via build config
00:01:42.812 jobstats: explicitly disabled via build config
00:01:42.812 latencystats: explicitly disabled via build config
00:01:42.812 lpm: explicitly disabled via build config
00:01:42.812 member: explicitly disabled via build config
00:01:42.812 pcapng: explicitly disabled via build config
00:01:42.812 rawdev: explicitly disabled via build config
00:01:42.812 regexdev: explicitly disabled via build config
00:01:42.812 mldev: explicitly disabled via build config
00:01:42.812 rib: explicitly disabled via build config
00:01:42.812 sched: explicitly disabled via build config
00:01:42.812 stack: explicitly disabled via build config
00:01:42.812 ipsec: explicitly disabled via build config
00:01:42.812 pdcp: explicitly disabled via build config
00:01:42.812 fib: explicitly disabled via build config
00:01:42.812 port: explicitly disabled via build config
00:01:42.812 pdump: explicitly disabled via build config
00:01:42.812 table: explicitly disabled via build config
00:01:42.812 pipeline: explicitly disabled via build config
00:01:42.812 graph: explicitly disabled via build config
00:01:42.812 node: explicitly disabled via build config
00:01:42.812
00:01:42.812 drivers:
00:01:42.812 common/cpt: not in enabled drivers build config
00:01:42.812 common/dpaax: not in enabled drivers build config
00:01:42.812 common/iavf: not in enabled drivers build config
00:01:42.812 common/idpf: not in enabled drivers build config
00:01:42.812 common/ionic: not in enabled drivers build config
00:01:42.812 common/mvep: not in enabled drivers build config
00:01:42.812 common/octeontx: not in enabled drivers build config
00:01:42.812 bus/auxiliary: not in enabled drivers build config
00:01:42.812 bus/cdx: not in enabled drivers build config
00:01:42.812 bus/dpaa: not in enabled drivers build config
00:01:42.812 bus/fslmc: not in enabled drivers build config
00:01:42.812 bus/ifpga: not in enabled drivers build config
00:01:42.812 bus/platform: not in enabled drivers build config
00:01:42.812 bus/uacce: not in enabled drivers build config
00:01:42.812 bus/vmbus: not in enabled drivers build config
00:01:42.812 common/cnxk: not in enabled drivers build config
00:01:42.812 common/mlx5: not in enabled drivers build config
00:01:42.812 common/nfp: not in enabled drivers build config
00:01:42.812 common/nitrox: not in enabled drivers build config
00:01:42.812 common/qat: not in enabled drivers build config
00:01:42.812 common/sfc_efx: not in enabled drivers build config
00:01:42.812 mempool/bucket: not in enabled drivers build config
00:01:42.812 mempool/cnxk: not in enabled drivers build config
00:01:42.812 mempool/dpaa: not in enabled drivers build config
00:01:42.812 mempool/dpaa2: not in enabled drivers build config
00:01:42.812 mempool/octeontx: not in enabled drivers build config
00:01:42.812 mempool/stack: not in enabled drivers build config
00:01:42.812 dma/cnxk: not in enabled drivers build config
00:01:42.812 dma/dpaa: not in enabled drivers build config
00:01:42.812 dma/dpaa2: not in enabled drivers build config
00:01:42.812 dma/hisilicon: not in enabled drivers build config
00:01:42.812 dma/idxd: not in enabled drivers build config
00:01:42.812 dma/ioat: not in enabled drivers build config
00:01:42.812 dma/skeleton: not in enabled drivers build config
00:01:42.812 net/af_packet: not in enabled drivers build config
00:01:42.812 net/af_xdp: not in enabled drivers build config
00:01:42.812 net/ark: not in enabled drivers build config
00:01:42.812 net/atlantic: not in enabled drivers build config
00:01:42.812 net/avp: not in enabled drivers build config
00:01:42.812 net/axgbe: not in enabled drivers build config
00:01:42.812 net/bnx2x: not in enabled drivers build config
00:01:42.812 net/bnxt: not in enabled drivers build config
00:01:42.812 net/bonding: not in enabled drivers build config
00:01:42.812 net/cnxk: not in enabled drivers build config
00:01:42.812 net/cpfl: not in enabled drivers build config
00:01:42.812 net/cxgbe: not in enabled drivers build config
00:01:42.812 net/dpaa: not in enabled drivers build config
00:01:42.812 net/dpaa2: not in enabled drivers build config
00:01:42.812 net/e1000: not in enabled drivers build config
00:01:42.812 net/ena: not in enabled drivers build config
00:01:42.812 net/enetc: not in enabled drivers build config
00:01:42.812 net/enetfec: not in enabled drivers build config
00:01:42.812 net/enic: not in enabled drivers build config
00:01:42.812 net/failsafe: not in enabled drivers build config
00:01:42.812 net/fm10k: not in enabled drivers build config
00:01:42.812 net/gve: not in enabled drivers build config
00:01:42.812 net/hinic: not in enabled drivers build config
00:01:42.812 net/hns3: not in enabled drivers build config
00:01:42.812 net/i40e: not in enabled drivers build config
00:01:42.812 net/iavf: not in enabled drivers build config
00:01:42.812 net/ice: not in enabled drivers build config
00:01:42.812 net/idpf: not in enabled drivers build config
00:01:42.812 net/igc: not in enabled drivers build config
00:01:42.812 net/ionic: not in enabled drivers build config
00:01:42.812 net/ipn3ke: not in enabled drivers build config
00:01:42.812 net/ixgbe: not in enabled drivers build config
00:01:42.812 net/mana: not in enabled drivers build config
00:01:42.812 net/memif: not in enabled drivers build config
00:01:42.812 net/mlx4: not in enabled drivers build config
00:01:42.812 net/mlx5: not in enabled drivers build config
00:01:42.812 net/mvneta: not in enabled drivers build config
00:01:42.812 net/mvpp2: not in enabled drivers build config
00:01:42.812 net/netvsc: not in enabled drivers build config
00:01:42.812 net/nfb: not in enabled drivers build config
00:01:42.812 net/nfp: not in enabled drivers build config
00:01:42.812 net/ngbe: not in enabled drivers build config
00:01:42.812 net/null: not in enabled drivers build config
00:01:42.812 net/octeontx: not in enabled drivers build config
00:01:42.812 net/octeon_ep: not in enabled drivers build config
00:01:42.812 net/pcap: not in enabled drivers build config
00:01:42.812 net/pfe: not in enabled drivers build config
00:01:42.813 net/qede: not in enabled drivers build config
00:01:42.813 net/ring: not in enabled drivers build config
00:01:42.813 net/sfc: not in enabled drivers build config
00:01:42.813 net/softnic: not in enabled drivers build config
00:01:42.813 net/tap: not in enabled drivers build config
00:01:42.813 net/thunderx: not in enabled drivers build config
00:01:42.813 net/txgbe: not in enabled drivers build config
00:01:42.813 net/vdev_netvsc: not in enabled drivers build config
00:01:42.813 net/vhost: not in enabled drivers build config
00:01:42.813 net/virtio: not in enabled drivers build config
00:01:42.813 net/vmxnet3: not in enabled drivers build config
00:01:42.813 raw/*: missing internal dependency, "rawdev"
00:01:42.813 crypto/armv8: not in enabled drivers build config
00:01:42.813 crypto/bcmfs: not in enabled drivers build config
00:01:42.813 crypto/caam_jr: not in enabled drivers build config
00:01:42.813 crypto/ccp: not in enabled drivers build config
00:01:42.813 crypto/cnxk: not in enabled drivers build config
00:01:42.813 crypto/dpaa_sec: not in enabled drivers build config
00:01:42.813 crypto/dpaa2_sec: not in enabled drivers build config
00:01:42.813 crypto/ipsec_mb: not in enabled drivers build config
00:01:42.813 crypto/mlx5: not in enabled drivers build config
00:01:42.813 crypto/mvsam: not in enabled drivers build config
00:01:42.813 crypto/nitrox: not in enabled drivers build config
00:01:42.813 crypto/null: not in enabled drivers build config
00:01:42.813 crypto/octeontx: not in enabled drivers build config
00:01:42.813 crypto/openssl: not in enabled drivers build config
00:01:42.813 crypto/scheduler: not in enabled drivers build config
00:01:42.813 crypto/uadk: not in enabled drivers build config
00:01:42.813 crypto/virtio: not in enabled drivers build config
00:01:42.813 compress/isal: not in enabled drivers build config
00:01:42.813 compress/mlx5: not in enabled drivers build config
00:01:42.813 compress/nitrox: not in enabled drivers build config
00:01:42.813 compress/octeontx: not in enabled drivers build config
00:01:42.813 compress/zlib: not in enabled drivers build config
00:01:42.813 regex/*: missing internal dependency, "regexdev"
00:01:42.813 ml/*: missing internal dependency, "mldev"
00:01:42.813 vdpa/ifc: not in enabled drivers build config
00:01:42.813 vdpa/mlx5: not in enabled drivers build config
00:01:42.813 vdpa/nfp: not in enabled drivers build config
00:01:42.813 vdpa/sfc: not in enabled drivers build config
00:01:42.813 event/*: missing internal dependency, "eventdev"
00:01:42.813 baseband/*: missing internal dependency, "bbdev"
00:01:42.813 gpu/*: missing internal dependency, "gpudev"
00:01:42.813
00:01:42.813
00:01:43.072 Build targets in project: 85
00:01:43.072
00:01:43.072 DPDK 24.03.0
00:01:43.072
00:01:43.072 User defined options
00:01:43.072 buildtype : debug
00:01:43.072 default_library : shared
00:01:43.072 libdir : lib
00:01:43.072 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:43.072 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:43.072 c_link_args :
00:01:43.072 cpu_instruction_set: native
00:01:43.072 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:43.072 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:43.072 enable_docs : false
00:01:43.072 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:43.072 enable_kmods : false
00:01:43.072 max_lcores : 128
00:01:43.072 tests : false
00:01:43.072
00:01:43.072 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:43.647 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:43.647 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:43.647 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:43.647 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:43.647 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:43.647 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:43.647 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:43.647 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:43.647 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:43.911 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:43.911 [10/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:43.911 [11/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:43.911 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:43.911 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:43.911 [14/268] Linking static target lib/librte_kvargs.a
00:01:43.911 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:43.911 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:43.911 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:43.911 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:43.911 [19/268] Linking static target lib/librte_log.a
00:01:43.911 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:43.911 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:43.911 [22/268] Linking static target lib/librte_pci.a
00:01:44.172 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:44.172 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:44.172 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:44.172 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:44.172 [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:44.172 [28/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:44.172 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:44.172 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:44.172 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:44.172 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:44.172 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:44.172 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:44.172 [35/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:44.172 [36/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:44.172 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:44.172 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:44.172 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:44.172 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:44.172 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:44.172 [42/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:44.172 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:44.172 [44/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:44.172 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:44.172 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:44.172 [47/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:44.172 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:44.172 [49/268] Linking static target lib/librte_meter.a
00:01:44.172 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:44.172 [51/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:44.172 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:44.172 [53/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:44.172 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:44.172 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:44.172 [56/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:44.172 [57/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:44.172 [58/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:44.172 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:44.172 [60/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:44.172 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:44.431 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:44.431 [63/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:44.431 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:44.431 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:44.431 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:44.431 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:44.431 [68/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:44.431 [69/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:44.431 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:44.431 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:44.431 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:44.431 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:44.431 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:44.431 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:44.431 [76/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:44.431 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:44.431 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:44.431 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:44.431 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:44.431 [81/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:44.431 [82/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:44.431 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:44.431 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:44.431 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:44.431 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:44.431 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:44.431 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:44.431 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:44.431 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:44.431 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:44.431 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:44.431 [93/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.431 [94/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:44.431 [95/268] Linking static target lib/librte_ring.a 00:01:44.431 [96/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:44.431 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:44.431 [98/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:44.431 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:44.431 [100/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.431 [101/268] Linking static target lib/librte_telemetry.a 00:01:44.431 [102/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:44.431 [103/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:44.431 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:44.431 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:44.431 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:44.431 [107/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:44.431 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:44.431 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:44.431 [110/268] Linking static target lib/librte_net.a 00:01:44.431 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:44.431 [112/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:44.431 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:44.431 [114/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:44.431 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:44.431 [116/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:44.431 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:44.431 [118/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:44.431 [119/268] Linking static target lib/librte_mempool.a 00:01:44.431 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:44.431 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:44.431 [122/268] Linking static target lib/librte_eal.a 00:01:44.431 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:44.431 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:44.431 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:44.431 [126/268] Linking static target lib/librte_cmdline.a 00:01:44.431 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:44.431 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:44.431 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:44.431 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:44.431 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:44.431 [132/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:44.431 [133/268] Linking static target lib/librte_rcu.a 00:01:44.690 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.690 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:44.690 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:44.690 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:44.690 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.690 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:44.690 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:44.690 [141/268] Linking target lib/librte_log.so.24.1 00:01:44.690 [142/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:44.690 [143/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:44.690 [144/268] Linking static target lib/librte_timer.a 00:01:44.690 [145/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:44.690 [146/268] Linking static target lib/librte_mbuf.a 00:01:44.690 [147/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.690 [148/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:44.690 [149/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.690 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:44.691 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:44.691 [152/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:44.691 [153/268] Compiling C object 
lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:44.691 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:44.691 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:44.691 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:44.691 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:44.691 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:44.691 [159/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:44.691 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:44.691 [161/268] Linking static target lib/librte_dmadev.a 00:01:44.691 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:44.691 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:44.691 [164/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:44.691 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:44.691 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:44.691 [167/268] Linking target lib/librte_kvargs.so.24.1 00:01:44.691 [168/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:44.691 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:44.691 [170/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:44.951 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:44.951 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:44.951 [173/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.951 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:44.951 [175/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:44.951 [176/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.951 [177/268] Linking target lib/librte_telemetry.so.24.1 00:01:44.951 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:44.951 [179/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:44.951 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:44.951 [181/268] Linking static target lib/librte_compressdev.a 00:01:44.951 [182/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:44.951 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:44.951 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:44.951 [185/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.951 [186/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.951 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:44.951 [188/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:44.951 [189/268] Linking static target drivers/librte_bus_vdev.a 00:01:44.951 [190/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:44.951 [191/268] Linking static target lib/librte_power.a 00:01:44.951 [192/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:44.951 [193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:44.951 [194/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:44.951 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:44.951 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:44.951 [197/268] Linking static target lib/librte_reorder.a 00:01:44.951 [198/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:44.951 [199/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.951 [200/268] Linking static target lib/librte_hash.a 00:01:44.951 [201/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.951 [202/268] Linking static target drivers/librte_mempool_ring.a 00:01:44.951 [203/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:44.951 [204/268] Linking static target lib/librte_security.a 00:01:45.210 [205/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.210 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:45.210 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:45.210 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:45.210 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.210 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.210 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:45.210 [212/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:45.210 [213/268] Linking static target lib/librte_cryptodev.a 00:01:45.210 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.470 [215/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.470 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:45.470 [217/268] Linking static target lib/librte_ethdev.a 00:01:45.470 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.470 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.470 [220/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.470 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.730 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.730 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.730 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:45.989 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.989 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.989 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.927 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:46.927 [229/268] Linking static target 
lib/librte_vhost.a 00:01:47.186 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.566 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.844 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.783 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.783 [234/268] Linking target lib/librte_eal.so.24.1 00:01:54.783 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:54.783 [236/268] Linking target lib/librte_ring.so.24.1 00:01:54.783 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:54.783 [238/268] Linking target lib/librte_pci.so.24.1 00:01:54.783 [239/268] Linking target lib/librte_dmadev.so.24.1 00:01:54.783 [240/268] Linking target lib/librte_timer.so.24.1 00:01:54.783 [241/268] Linking target lib/librte_meter.so.24.1 00:01:55.043 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:55.043 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:55.043 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:55.043 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:55.043 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:55.043 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:55.043 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:55.043 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:55.043 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:55.043 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:55.043 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:55.302 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:55.302 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:55.302 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:55.302 [256/268] Linking target lib/librte_net.so.24.1 00:01:55.302 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:55.302 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:55.561 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:55.561 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:55.561 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:55.561 [262/268] Linking target lib/librte_hash.so.24.1 00:01:55.561 [263/268] Linking target lib/librte_security.so.24.1 00:01:55.561 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:55.561 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:55.821 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:55.821 [267/268] Linking target lib/librte_power.so.24.1 00:01:55.821 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:55.821 INFO: autodetecting backend as ninja 00:01:55.821 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:05.811 CC lib/ut_mock/mock.o 00:02:05.811 CC lib/ut/ut.o 00:02:05.811 CC lib/log/log.o 00:02:05.811 CC lib/log/log_flags.o 00:02:05.811 CC 
lib/log/log_deprecated.o 00:02:06.071 LIB libspdk_log.a 00:02:06.071 LIB libspdk_ut.a 00:02:06.071 LIB libspdk_ut_mock.a 00:02:06.071 SO libspdk_log.so.7.1 00:02:06.071 SO libspdk_ut.so.2.0 00:02:06.071 SO libspdk_ut_mock.so.6.0 00:02:06.071 SYMLINK libspdk_log.so 00:02:06.071 SYMLINK libspdk_ut_mock.so 00:02:06.071 SYMLINK libspdk_ut.so 00:02:06.330 CXX lib/trace_parser/trace.o 00:02:06.330 CC lib/util/base64.o 00:02:06.330 CC lib/util/bit_array.o 00:02:06.330 CC lib/util/cpuset.o 00:02:06.330 CC lib/util/crc16.o 00:02:06.330 CC lib/util/crc32.o 00:02:06.330 CC lib/util/crc32c.o 00:02:06.330 CC lib/util/crc32_ieee.o 00:02:06.330 CC lib/util/crc64.o 00:02:06.330 CC lib/ioat/ioat.o 00:02:06.330 CC lib/util/dif.o 00:02:06.330 CC lib/util/fd.o 00:02:06.330 CC lib/dma/dma.o 00:02:06.330 CC lib/util/fd_group.o 00:02:06.330 CC lib/util/file.o 00:02:06.330 CC lib/util/hexlify.o 00:02:06.330 CC lib/util/iov.o 00:02:06.330 CC lib/util/math.o 00:02:06.330 CC lib/util/net.o 00:02:06.330 CC lib/util/pipe.o 00:02:06.330 CC lib/util/strerror_tls.o 00:02:06.330 CC lib/util/string.o 00:02:06.330 CC lib/util/uuid.o 00:02:06.330 CC lib/util/xor.o 00:02:06.330 CC lib/util/zipf.o 00:02:06.330 CC lib/util/md5.o 00:02:06.589 CC lib/vfio_user/host/vfio_user_pci.o 00:02:06.589 CC lib/vfio_user/host/vfio_user.o 00:02:06.589 LIB libspdk_dma.a 00:02:06.589 SO libspdk_dma.so.5.0 00:02:06.589 LIB libspdk_ioat.a 00:02:06.847 SO libspdk_ioat.so.7.0 00:02:06.847 SYMLINK libspdk_dma.so 00:02:06.847 SYMLINK libspdk_ioat.so 00:02:06.847 LIB libspdk_vfio_user.a 00:02:06.847 SO libspdk_vfio_user.so.5.0 00:02:06.847 LIB libspdk_util.a 00:02:06.847 SYMLINK libspdk_vfio_user.so 00:02:06.847 SO libspdk_util.so.10.1 00:02:07.106 SYMLINK libspdk_util.so 00:02:07.106 LIB libspdk_trace_parser.a 00:02:07.106 SO libspdk_trace_parser.so.6.0 00:02:07.367 SYMLINK libspdk_trace_parser.so 00:02:07.367 CC lib/env_dpdk/env.o 00:02:07.367 CC lib/env_dpdk/memory.o 00:02:07.367 CC lib/rdma_utils/rdma_utils.o 00:02:07.367 CC lib/env_dpdk/pci.o 00:02:07.367 CC lib/env_dpdk/init.o 00:02:07.367 CC lib/idxd/idxd.o 00:02:07.367 CC lib/env_dpdk/threads.o 00:02:07.367 CC lib/idxd/idxd_user.o 00:02:07.367 CC lib/env_dpdk/pci_ioat.o 00:02:07.367 CC lib/vmd/vmd.o 00:02:07.367 CC lib/vmd/led.o 00:02:07.367 CC lib/idxd/idxd_kernel.o 00:02:07.367 CC lib/env_dpdk/pci_virtio.o 00:02:07.367 CC lib/json/json_parse.o 00:02:07.367 CC lib/env_dpdk/pci_vmd.o 00:02:07.367 CC lib/conf/conf.o 00:02:07.367 CC lib/env_dpdk/pci_idxd.o 00:02:07.367 CC lib/json/json_util.o 00:02:07.367 CC lib/json/json_write.o 00:02:07.367 CC lib/env_dpdk/pci_event.o 00:02:07.367 CC lib/env_dpdk/sigbus_handler.o 00:02:07.367 CC lib/env_dpdk/pci_dpdk.o 00:02:07.367 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:07.367 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:07.626 LIB libspdk_conf.a 00:02:07.626 LIB libspdk_rdma_utils.a 00:02:07.626 LIB libspdk_json.a 00:02:07.626 SO libspdk_conf.so.6.0 00:02:07.626 SO libspdk_rdma_utils.so.1.0 00:02:07.626 SO libspdk_json.so.6.0 00:02:07.626 SYMLINK libspdk_conf.so 00:02:07.885 SYMLINK libspdk_rdma_utils.so 00:02:07.885 SYMLINK libspdk_json.so 00:02:07.885 LIB libspdk_idxd.a 00:02:07.885 LIB libspdk_vmd.a 00:02:07.885 SO libspdk_idxd.so.12.1 00:02:07.885 SO libspdk_vmd.so.6.0 00:02:07.885 SYMLINK libspdk_idxd.so 00:02:08.145 SYMLINK libspdk_vmd.so 00:02:08.145 CC lib/rdma_provider/common.o 00:02:08.145 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:08.145 CC lib/jsonrpc/jsonrpc_server.o 00:02:08.145 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.145 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:08.145 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:08.145 LIB libspdk_rdma_provider.a 00:02:08.404 SO libspdk_rdma_provider.so.7.0 00:02:08.404 LIB libspdk_jsonrpc.a 00:02:08.404 SO libspdk_jsonrpc.so.6.0 00:02:08.404 SYMLINK libspdk_rdma_provider.so 00:02:08.404 SYMLINK libspdk_jsonrpc.so 00:02:08.404 LIB libspdk_env_dpdk.a 00:02:08.404 SO libspdk_env_dpdk.so.15.1 00:02:08.666 SYMLINK libspdk_env_dpdk.so 00:02:08.666 CC lib/rpc/rpc.o 00:02:08.926 LIB libspdk_rpc.a 00:02:08.926 SO libspdk_rpc.so.6.0 00:02:08.926 SYMLINK libspdk_rpc.so 00:02:09.186 CC lib/trace/trace.o 00:02:09.186 CC lib/trace/trace_flags.o 00:02:09.186 CC lib/trace/trace_rpc.o 00:02:09.186 CC lib/notify/notify.o 00:02:09.186 CC lib/notify/notify_rpc.o 00:02:09.186 CC lib/keyring/keyring.o 00:02:09.186 CC lib/keyring/keyring_rpc.o 00:02:09.446 LIB libspdk_notify.a 00:02:09.446 SO libspdk_notify.so.6.0 00:02:09.446 LIB libspdk_trace.a 00:02:09.446 LIB libspdk_keyring.a 00:02:09.446 SO libspdk_trace.so.11.0 00:02:09.446 SYMLINK libspdk_notify.so 00:02:09.446 SO libspdk_keyring.so.2.0 00:02:09.706 SYMLINK libspdk_trace.so 00:02:09.706 SYMLINK libspdk_keyring.so 00:02:09.966 CC lib/thread/thread.o 00:02:09.966 CC lib/thread/iobuf.o 00:02:09.966 CC lib/sock/sock.o 00:02:09.966 CC lib/sock/sock_rpc.o 00:02:10.225 LIB libspdk_sock.a 00:02:10.225 SO libspdk_sock.so.10.0 00:02:10.225 SYMLINK libspdk_sock.so 00:02:10.794 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:10.794 CC lib/nvme/nvme_ctrlr.o 00:02:10.794 CC lib/nvme/nvme_fabric.o 00:02:10.794 CC lib/nvme/nvme_ns_cmd.o 00:02:10.794 CC lib/nvme/nvme_ns.o 00:02:10.794 CC lib/nvme/nvme_pcie_common.o 00:02:10.794 CC lib/nvme/nvme_pcie.o 00:02:10.794 CC lib/nvme/nvme_qpair.o 00:02:10.794 CC lib/nvme/nvme.o 00:02:10.794 CC lib/nvme/nvme_quirks.o 00:02:10.794 CC lib/nvme/nvme_transport.o 00:02:10.794 CC lib/nvme/nvme_discovery.o 00:02:10.794 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:10.794 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:10.794 CC lib/nvme/nvme_tcp.o 00:02:10.794 CC lib/nvme/nvme_opal.o 00:02:10.794 CC lib/nvme/nvme_io_msg.o 00:02:10.794 CC lib/nvme/nvme_poll_group.o 00:02:10.794 CC lib/nvme/nvme_zns.o 00:02:10.794 CC lib/nvme/nvme_stubs.o 00:02:10.794 CC lib/nvme/nvme_auth.o 00:02:10.794 CC lib/nvme/nvme_cuse.o 00:02:10.794 CC lib/nvme/nvme_vfio_user.o 00:02:10.794 CC lib/nvme/nvme_rdma.o 00:02:11.053 LIB libspdk_thread.a 00:02:11.053 SO libspdk_thread.so.11.0 00:02:11.053 SYMLINK libspdk_thread.so 00:02:11.312 CC lib/accel/accel_rpc.o 00:02:11.312 CC lib/accel/accel_sw.o 00:02:11.312 CC lib/accel/accel.o 00:02:11.312 CC lib/fsdev/fsdev.o 00:02:11.312 CC lib/fsdev/fsdev_rpc.o 00:02:11.312 CC lib/fsdev/fsdev_io.o 00:02:11.312 CC lib/virtio/virtio.o 00:02:11.312 CC lib/virtio/virtio_vfio_user.o 00:02:11.312 CC lib/virtio/virtio_vhost_user.o 00:02:11.312 CC lib/virtio/virtio_pci.o 00:02:11.312 CC lib/vfu_tgt/tgt_endpoint.o 00:02:11.312 CC lib/vfu_tgt/tgt_rpc.o 00:02:11.312 CC lib/init/json_config.o 00:02:11.312 CC lib/init/subsystem.o 00:02:11.312 CC lib/init/subsystem_rpc.o 00:02:11.312 CC lib/blob/blobstore.o 00:02:11.312 CC lib/init/rpc.o 00:02:11.312 CC lib/blob/request.o 00:02:11.312 CC lib/blob/zeroes.o 00:02:11.312 CC lib/blob/blob_bs_dev.o 00:02:11.571 LIB libspdk_init.a 00:02:11.571 SO libspdk_init.so.6.0 00:02:11.571 LIB libspdk_virtio.a 00:02:11.571 LIB libspdk_vfu_tgt.a 00:02:11.571 SYMLINK libspdk_init.so 00:02:11.571 SO libspdk_virtio.so.7.0 00:02:11.831 SO libspdk_vfu_tgt.so.3.0 00:02:11.831 SYMLINK libspdk_virtio.so 00:02:11.831 SYMLINK 
libspdk_vfu_tgt.so 00:02:11.831 LIB libspdk_fsdev.a 00:02:11.831 SO libspdk_fsdev.so.2.0 00:02:12.090 SYMLINK libspdk_fsdev.so 00:02:12.090 CC lib/event/app.o 00:02:12.090 CC lib/event/reactor.o 00:02:12.090 CC lib/event/log_rpc.o 00:02:12.090 CC lib/event/app_rpc.o 00:02:12.090 CC lib/event/scheduler_static.o 00:02:12.090 LIB libspdk_accel.a 00:02:12.090 SO libspdk_accel.so.16.0 00:02:12.350 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:12.350 SYMLINK libspdk_accel.so 00:02:12.350 LIB libspdk_nvme.a 00:02:12.350 LIB libspdk_event.a 00:02:12.350 SO libspdk_event.so.14.0 00:02:12.350 SO libspdk_nvme.so.15.0 00:02:12.350 SYMLINK libspdk_event.so 00:02:12.609 CC lib/bdev/bdev.o 00:02:12.609 CC lib/bdev/bdev_rpc.o 00:02:12.609 CC lib/bdev/bdev_zone.o 00:02:12.609 CC lib/bdev/part.o 00:02:12.609 CC lib/bdev/scsi_nvme.o 00:02:12.609 SYMLINK libspdk_nvme.so 00:02:12.868 LIB libspdk_fuse_dispatcher.a 00:02:12.868 SO libspdk_fuse_dispatcher.so.1.0 00:02:12.868 SYMLINK libspdk_fuse_dispatcher.so 00:02:13.808 LIB libspdk_blob.a 00:02:13.808 SO libspdk_blob.so.11.0 00:02:13.809 SYMLINK libspdk_blob.so 00:02:14.068 CC lib/blobfs/blobfs.o 00:02:14.068 CC lib/blobfs/tree.o 00:02:14.068 CC lib/lvol/lvol.o 00:02:14.327 LIB libspdk_bdev.a 00:02:14.586 SO libspdk_bdev.so.17.0 00:02:14.586 SYMLINK libspdk_bdev.so 00:02:14.586 LIB libspdk_blobfs.a 00:02:14.586 SO libspdk_blobfs.so.10.0 00:02:14.586 LIB libspdk_lvol.a 00:02:14.846 SYMLINK libspdk_blobfs.so 00:02:14.846 SO libspdk_lvol.so.10.0 00:02:14.846 SYMLINK libspdk_lvol.so 00:02:14.846 CC lib/ublk/ublk.o 00:02:14.846 CC lib/ublk/ublk_rpc.o 00:02:14.846 CC lib/nbd/nbd.o 00:02:14.846 CC lib/scsi/dev.o 00:02:14.846 CC lib/nbd/nbd_rpc.o 00:02:14.846 CC lib/scsi/lun.o 00:02:14.846 CC lib/nvmf/ctrlr.o 00:02:14.846 CC lib/scsi/port.o 00:02:14.846 CC lib/nvmf/ctrlr_discovery.o 00:02:14.846 CC lib/nvmf/ctrlr_bdev.o 00:02:14.846 CC lib/scsi/scsi.o 00:02:14.846 CC lib/nvmf/subsystem.o 00:02:14.846 CC lib/scsi/scsi_bdev.o 00:02:14.846 CC lib/nvmf/nvmf.o 00:02:14.846 CC lib/scsi/scsi_pr.o 00:02:14.846 CC lib/nvmf/nvmf_rpc.o 00:02:14.846 CC lib/scsi/scsi_rpc.o 00:02:14.846 CC lib/nvmf/transport.o 00:02:14.846 CC lib/ftl/ftl_core.o 00:02:14.846 CC lib/scsi/task.o 00:02:14.846 CC lib/nvmf/tcp.o 00:02:14.846 CC lib/ftl/ftl_init.o 00:02:14.846 CC lib/nvmf/stubs.o 00:02:14.846 CC lib/ftl/ftl_layout.o 00:02:14.846 CC lib/ftl/ftl_debug.o 00:02:14.846 CC lib/nvmf/mdns_server.o 00:02:14.846 CC lib/ftl/ftl_io.o 00:02:14.846 CC lib/nvmf/vfio_user.o 00:02:14.846 CC lib/nvmf/rdma.o 00:02:14.846 CC lib/ftl/ftl_sb.o 00:02:14.846 CC lib/nvmf/auth.o 00:02:14.846 CC lib/ftl/ftl_l2p.o 00:02:14.846 CC lib/ftl/ftl_l2p_flat.o 00:02:14.846 CC lib/ftl/ftl_nv_cache.o 00:02:14.846 CC lib/ftl/ftl_band.o 00:02:14.846 CC lib/ftl/ftl_band_ops.o 00:02:14.846 CC lib/ftl/ftl_rq.o 00:02:14.846 CC lib/ftl/ftl_writer.o 00:02:14.846 CC lib/ftl/ftl_reloc.o 00:02:14.846 CC lib/ftl/ftl_l2p_cache.o 00:02:14.846 CC lib/ftl/ftl_p2l.o 00:02:14.846 CC lib/ftl/ftl_p2l_log.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:14.846 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:14.846 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:14.846 CC lib/ftl/utils/ftl_conf.o 00:02:14.846 CC lib/ftl/utils/ftl_md.o 00:02:14.846 CC lib/ftl/utils/ftl_mempool.o 00:02:14.846 CC lib/ftl/utils/ftl_bitmap.o 00:02:14.846 CC lib/ftl/utils/ftl_property.o 00:02:14.846 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:14.846 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:14.846 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:14.846 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:14.846 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:14.846 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:14.846 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:14.846 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:14.846 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:14.846 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:14.846 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:14.846 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:14.846 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:14.846 CC lib/ftl/base/ftl_base_dev.o 00:02:14.846 CC lib/ftl/base/ftl_base_bdev.o 00:02:14.846 CC lib/ftl/ftl_trace.o 00:02:15.415 LIB libspdk_nbd.a 00:02:15.415 SO libspdk_nbd.so.7.0 00:02:15.415 SYMLINK libspdk_nbd.so 00:02:15.675 LIB libspdk_scsi.a 00:02:15.675 SO libspdk_scsi.so.9.0 00:02:15.675 LIB libspdk_ublk.a 00:02:15.675 SO libspdk_ublk.so.3.0 00:02:15.675 SYMLINK libspdk_scsi.so 00:02:15.675 SYMLINK libspdk_ublk.so 00:02:15.935 LIB libspdk_ftl.a 00:02:15.935 CC lib/vhost/vhost_rpc.o 00:02:15.935 CC lib/vhost/vhost.o 00:02:15.935 CC lib/vhost/vhost_scsi.o 00:02:15.935 CC lib/vhost/vhost_blk.o 00:02:15.935 CC lib/iscsi/conn.o 00:02:15.935 CC lib/iscsi/init_grp.o 00:02:15.935 CC lib/vhost/rte_vhost_user.o 00:02:15.935 CC lib/iscsi/iscsi.o 00:02:15.935 SO libspdk_ftl.so.9.0 00:02:15.935 CC lib/iscsi/param.o 00:02:15.935 CC lib/iscsi/portal_grp.o 00:02:15.935 CC lib/iscsi/tgt_node.o 00:02:15.935 CC lib/iscsi/iscsi_subsystem.o 00:02:15.935 CC lib/iscsi/iscsi_rpc.o 00:02:15.935 CC lib/iscsi/task.o 00:02:16.195 SYMLINK libspdk_ftl.so 00:02:16.765 LIB libspdk_nvmf.a 00:02:16.765 SO libspdk_nvmf.so.20.0 00:02:16.765 LIB libspdk_vhost.a 00:02:16.765 SO libspdk_vhost.so.8.0 00:02:17.024 SYMLINK libspdk_nvmf.so 00:02:17.024 SYMLINK libspdk_vhost.so 00:02:17.024 LIB libspdk_iscsi.a 00:02:17.024 SO libspdk_iscsi.so.8.0 00:02:17.284 SYMLINK libspdk_iscsi.so 00:02:17.854 CC module/vfu_device/vfu_virtio_blk.o 00:02:17.854 CC module/vfu_device/vfu_virtio.o 00:02:17.854 CC module/vfu_device/vfu_virtio_scsi.o 00:02:17.854 CC module/vfu_device/vfu_virtio_rpc.o 00:02:17.854 CC module/vfu_device/vfu_virtio_fs.o 00:02:17.854 CC module/env_dpdk/env_dpdk_rpc.o 00:02:17.854 CC module/accel/error/accel_error.o 00:02:17.854 CC module/fsdev/aio/fsdev_aio.o 00:02:17.854 CC module/accel/error/accel_error_rpc.o 00:02:17.854 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:17.854 CC module/sock/posix/posix.o 00:02:17.854 CC module/fsdev/aio/linux_aio_mgr.o 00:02:17.854 CC module/blob/bdev/blob_bdev.o 00:02:17.854 LIB libspdk_env_dpdk_rpc.a 00:02:17.854 CC module/keyring/file/keyring.o 00:02:17.854 CC module/keyring/file/keyring_rpc.o 00:02:17.854 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:17.854 CC module/accel/ioat/accel_ioat.o 00:02:17.854 CC module/scheduler/gscheduler/gscheduler.o 00:02:17.854 CC module/keyring/linux/keyring.o 00:02:17.854 CC module/accel/iaa/accel_iaa_rpc.o 00:02:17.854 CC module/accel/iaa/accel_iaa.o 00:02:17.854 CC module/accel/ioat/accel_ioat_rpc.o 00:02:17.854 CC module/keyring/linux/keyring_rpc.o 00:02:17.854 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:17.854 CC 
module/accel/dsa/accel_dsa.o 00:02:17.854 CC module/accel/dsa/accel_dsa_rpc.o 00:02:17.854 SO libspdk_env_dpdk_rpc.so.6.0 00:02:17.854 SYMLINK libspdk_env_dpdk_rpc.so 00:02:18.113 LIB libspdk_keyring_file.a 00:02:18.113 LIB libspdk_scheduler_dpdk_governor.a 00:02:18.113 LIB libspdk_scheduler_gscheduler.a 00:02:18.113 LIB libspdk_accel_error.a 00:02:18.113 LIB libspdk_keyring_linux.a 00:02:18.113 SO libspdk_keyring_file.so.2.0 00:02:18.113 LIB libspdk_accel_ioat.a 00:02:18.113 SO libspdk_accel_error.so.2.0 00:02:18.113 SO libspdk_keyring_linux.so.1.0 00:02:18.113 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:18.113 SO libspdk_scheduler_gscheduler.so.4.0 00:02:18.113 LIB libspdk_scheduler_dynamic.a 00:02:18.113 SO libspdk_accel_ioat.so.6.0 00:02:18.113 LIB libspdk_accel_iaa.a 00:02:18.113 SYMLINK libspdk_keyring_file.so 00:02:18.113 SYMLINK libspdk_scheduler_gscheduler.so 00:02:18.113 SO libspdk_scheduler_dynamic.so.4.0 00:02:18.113 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:18.113 SYMLINK libspdk_keyring_linux.so 00:02:18.113 SYMLINK libspdk_accel_error.so 00:02:18.113 LIB libspdk_blob_bdev.a 00:02:18.113 SO libspdk_accel_iaa.so.3.0 00:02:18.113 SYMLINK libspdk_accel_ioat.so 00:02:18.113 LIB libspdk_accel_dsa.a 00:02:18.113 SO libspdk_blob_bdev.so.11.0 00:02:18.113 SYMLINK libspdk_scheduler_dynamic.so 00:02:18.113 SYMLINK libspdk_accel_iaa.so 00:02:18.113 SO libspdk_accel_dsa.so.5.0 00:02:18.114 SYMLINK libspdk_blob_bdev.so 00:02:18.114 LIB libspdk_vfu_device.a 00:02:18.374 SYMLINK libspdk_accel_dsa.so 00:02:18.374 SO libspdk_vfu_device.so.3.0 00:02:18.374 SYMLINK libspdk_vfu_device.so 00:02:18.374 LIB libspdk_fsdev_aio.a 00:02:18.374 SO libspdk_fsdev_aio.so.1.0 00:02:18.374 LIB libspdk_sock_posix.a 00:02:18.374 SO libspdk_sock_posix.so.6.0 00:02:18.638 SYMLINK libspdk_fsdev_aio.so 00:02:18.638 SYMLINK libspdk_sock_posix.so 00:02:18.638 CC module/bdev/delay/vbdev_delay.o 00:02:18.638 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:18.638 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.638 CC module/bdev/passthru/vbdev_passthru.o 00:02:18.638 CC module/bdev/malloc/bdev_malloc.o 00:02:18.639 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:18.639 CC module/bdev/iscsi/bdev_iscsi.o 00:02:18.639 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.639 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.639 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:18.639 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:18.639 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:18.639 CC module/bdev/error/vbdev_error.o 00:02:18.639 CC module/bdev/error/vbdev_error_rpc.o 00:02:18.639 CC module/bdev/aio/bdev_aio.o 00:02:18.639 CC module/bdev/aio/bdev_aio_rpc.o 00:02:18.639 CC module/bdev/split/vbdev_split.o 00:02:18.639 CC module/bdev/null/bdev_null.o 00:02:18.639 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:18.639 CC module/bdev/ftl/bdev_ftl.o 00:02:18.639 CC module/bdev/split/vbdev_split_rpc.o 00:02:18.639 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:18.639 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:18.639 CC module/bdev/null/bdev_null_rpc.o 00:02:18.639 CC module/bdev/nvme/bdev_nvme.o 00:02:18.639 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:18.639 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:18.639 CC module/bdev/lvol/vbdev_lvol.o 00:02:18.639 CC module/bdev/nvme/nvme_rpc.o 00:02:18.639 CC module/bdev/nvme/bdev_mdns_client.o 00:02:18.639 CC module/bdev/gpt/gpt.o 00:02:18.639 CC module/bdev/gpt/vbdev_gpt.o 00:02:18.639 CC module/bdev/nvme/vbdev_opal.o 00:02:18.639 CC module/bdev/lvol/vbdev_lvol_rpc.o 
00:02:18.639 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:18.639 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:18.639 CC module/bdev/raid/bdev_raid.o 00:02:18.639 CC module/bdev/raid/bdev_raid_rpc.o 00:02:18.639 CC module/bdev/raid/bdev_raid_sb.o 00:02:18.639 CC module/bdev/raid/raid0.o 00:02:18.639 CC module/bdev/raid/raid1.o 00:02:18.639 CC module/bdev/raid/concat.o 00:02:18.977 LIB libspdk_blobfs_bdev.a 00:02:18.977 SO libspdk_blobfs_bdev.so.6.0 00:02:18.977 LIB libspdk_bdev_split.a 00:02:18.977 LIB libspdk_bdev_error.a 00:02:18.977 LIB libspdk_bdev_gpt.a 00:02:18.977 LIB libspdk_bdev_passthru.a 00:02:18.977 SO libspdk_bdev_split.so.6.0 00:02:18.977 LIB libspdk_bdev_null.a 00:02:18.977 LIB libspdk_bdev_zone_block.a 00:02:18.977 LIB libspdk_bdev_ftl.a 00:02:18.977 SYMLINK libspdk_blobfs_bdev.so 00:02:18.977 SO libspdk_bdev_error.so.6.0 00:02:18.977 SO libspdk_bdev_gpt.so.6.0 00:02:18.977 SO libspdk_bdev_null.so.6.0 00:02:18.977 SO libspdk_bdev_passthru.so.6.0 00:02:18.977 SO libspdk_bdev_zone_block.so.6.0 00:02:18.977 SYMLINK libspdk_bdev_split.so 00:02:18.977 SO libspdk_bdev_ftl.so.6.0 00:02:18.977 LIB libspdk_bdev_iscsi.a 00:02:19.256 LIB libspdk_bdev_delay.a 00:02:19.256 LIB libspdk_bdev_aio.a 00:02:19.256 LIB libspdk_bdev_malloc.a 00:02:19.256 SYMLINK libspdk_bdev_null.so 00:02:19.256 SO libspdk_bdev_iscsi.so.6.0 00:02:19.256 SYMLINK libspdk_bdev_gpt.so 00:02:19.256 SYMLINK libspdk_bdev_error.so 00:02:19.256 SYMLINK libspdk_bdev_passthru.so 00:02:19.256 SO libspdk_bdev_delay.so.6.0 00:02:19.256 SYMLINK libspdk_bdev_zone_block.so 00:02:19.256 SO libspdk_bdev_aio.so.6.0 00:02:19.256 SO libspdk_bdev_malloc.so.6.0 00:02:19.256 SYMLINK libspdk_bdev_ftl.so 00:02:19.256 SYMLINK libspdk_bdev_iscsi.so 00:02:19.256 SYMLINK libspdk_bdev_aio.so 00:02:19.256 SYMLINK libspdk_bdev_delay.so 00:02:19.256 SYMLINK libspdk_bdev_malloc.so 00:02:19.256 LIB libspdk_bdev_lvol.a 00:02:19.256 LIB libspdk_bdev_virtio.a 00:02:19.256 SO libspdk_bdev_lvol.so.6.0 00:02:19.256 SO libspdk_bdev_virtio.so.6.0 00:02:19.256 SYMLINK libspdk_bdev_lvol.so 00:02:19.256 SYMLINK libspdk_bdev_virtio.so 00:02:19.528 LIB libspdk_bdev_raid.a 00:02:19.528 SO libspdk_bdev_raid.so.6.0 00:02:19.528 SYMLINK libspdk_bdev_raid.so 00:02:20.518 LIB libspdk_bdev_nvme.a 00:02:20.518 SO libspdk_bdev_nvme.so.7.1 00:02:20.778 SYMLINK libspdk_bdev_nvme.so 00:02:21.347 CC module/event/subsystems/iobuf/iobuf.o 00:02:21.347 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:21.347 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:21.347 CC module/event/subsystems/keyring/keyring.o 00:02:21.347 CC module/event/subsystems/vmd/vmd.o 00:02:21.347 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:21.347 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:21.347 CC module/event/subsystems/sock/sock.o 00:02:21.347 CC module/event/subsystems/fsdev/fsdev.o 00:02:21.347 CC module/event/subsystems/scheduler/scheduler.o 00:02:21.347 LIB libspdk_event_vhost_blk.a 00:02:21.347 LIB libspdk_event_vfu_tgt.a 00:02:21.347 LIB libspdk_event_keyring.a 00:02:21.347 LIB libspdk_event_fsdev.a 00:02:21.347 LIB libspdk_event_sock.a 00:02:21.347 LIB libspdk_event_iobuf.a 00:02:21.347 SO libspdk_event_vhost_blk.so.3.0 00:02:21.347 LIB libspdk_event_scheduler.a 00:02:21.606 LIB libspdk_event_vmd.a 00:02:21.606 SO libspdk_event_vfu_tgt.so.3.0 00:02:21.606 SO libspdk_event_keyring.so.1.0 00:02:21.606 SO libspdk_event_sock.so.5.0 00:02:21.606 SO libspdk_event_fsdev.so.1.0 00:02:21.606 SO libspdk_event_iobuf.so.3.0 00:02:21.606 SO libspdk_event_scheduler.so.4.0 00:02:21.606 SO 
libspdk_event_vmd.so.6.0 00:02:21.606 SYMLINK libspdk_event_vhost_blk.so 00:02:21.606 SYMLINK libspdk_event_vfu_tgt.so 00:02:21.606 SYMLINK libspdk_event_keyring.so 00:02:21.606 SYMLINK libspdk_event_sock.so 00:02:21.606 SYMLINK libspdk_event_scheduler.so 00:02:21.607 SYMLINK libspdk_event_fsdev.so 00:02:21.607 SYMLINK libspdk_event_iobuf.so 00:02:21.607 SYMLINK libspdk_event_vmd.so 00:02:21.866 CC module/event/subsystems/accel/accel.o 00:02:22.125 LIB libspdk_event_accel.a 00:02:22.125 SO libspdk_event_accel.so.6.0 00:02:22.125 SYMLINK libspdk_event_accel.so 00:02:22.385 CC module/event/subsystems/bdev/bdev.o 00:02:22.644 LIB libspdk_event_bdev.a 00:02:22.644 SO libspdk_event_bdev.so.6.0 00:02:22.644 SYMLINK libspdk_event_bdev.so 00:02:22.904 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:22.904 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:22.904 CC module/event/subsystems/scsi/scsi.o 00:02:22.904 CC module/event/subsystems/ublk/ublk.o 00:02:22.904 CC module/event/subsystems/nbd/nbd.o 00:02:23.164 LIB libspdk_event_ublk.a 00:02:23.164 LIB libspdk_event_nbd.a 00:02:23.164 LIB libspdk_event_scsi.a 00:02:23.164 SO libspdk_event_ublk.so.3.0 00:02:23.164 SO libspdk_event_nbd.so.6.0 00:02:23.164 SO libspdk_event_scsi.so.6.0 00:02:23.164 LIB libspdk_event_nvmf.a 00:02:23.164 SYMLINK libspdk_event_ublk.so 00:02:23.164 SYMLINK libspdk_event_nbd.so 00:02:23.164 SO libspdk_event_nvmf.so.6.0 00:02:23.164 SYMLINK libspdk_event_scsi.so 00:02:23.423 SYMLINK libspdk_event_nvmf.so 00:02:23.682 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:23.682 CC module/event/subsystems/iscsi/iscsi.o 00:02:23.682 LIB libspdk_event_vhost_scsi.a 00:02:23.682 LIB libspdk_event_iscsi.a 00:02:23.682 SO libspdk_event_vhost_scsi.so.3.0 00:02:23.682 SO libspdk_event_iscsi.so.6.0 00:02:23.682 SYMLINK libspdk_event_vhost_scsi.so 00:02:23.942 SYMLINK libspdk_event_iscsi.so 00:02:23.942 SO libspdk.so.6.0 00:02:23.942 SYMLINK libspdk.so 00:02:24.521 CXX app/trace/trace.o 00:02:24.521 CC app/spdk_top/spdk_top.o 00:02:24.521 CC app/spdk_nvme_perf/perf.o 00:02:24.521 CC app/trace_record/trace_record.o 00:02:24.521 CC app/spdk_lspci/spdk_lspci.o 00:02:24.521 CC app/spdk_nvme_identify/identify.o 00:02:24.521 CC app/spdk_nvme_discover/discovery_aer.o 00:02:24.521 TEST_HEADER include/spdk/accel.h 00:02:24.521 TEST_HEADER include/spdk/assert.h 00:02:24.521 TEST_HEADER include/spdk/accel_module.h 00:02:24.521 CC test/rpc_client/rpc_client_test.o 00:02:24.521 TEST_HEADER include/spdk/barrier.h 00:02:24.521 TEST_HEADER include/spdk/base64.h 00:02:24.521 TEST_HEADER include/spdk/bdev_module.h 00:02:24.521 TEST_HEADER include/spdk/bdev.h 00:02:24.521 TEST_HEADER include/spdk/bdev_zone.h 00:02:24.521 TEST_HEADER include/spdk/bit_array.h 00:02:24.521 TEST_HEADER include/spdk/bit_pool.h 00:02:24.521 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:24.521 TEST_HEADER include/spdk/blob_bdev.h 00:02:24.521 TEST_HEADER include/spdk/blob.h 00:02:24.521 TEST_HEADER include/spdk/blobfs.h 00:02:24.521 TEST_HEADER include/spdk/config.h 00:02:24.521 TEST_HEADER include/spdk/conf.h 00:02:24.521 TEST_HEADER include/spdk/crc16.h 00:02:24.521 TEST_HEADER include/spdk/cpuset.h 00:02:24.521 TEST_HEADER include/spdk/crc32.h 00:02:24.521 TEST_HEADER include/spdk/dif.h 00:02:24.521 TEST_HEADER include/spdk/crc64.h 00:02:24.521 TEST_HEADER include/spdk/dma.h 00:02:24.521 TEST_HEADER include/spdk/env_dpdk.h 00:02:24.521 TEST_HEADER include/spdk/endian.h 00:02:24.521 TEST_HEADER include/spdk/env.h 00:02:24.521 TEST_HEADER include/spdk/event.h 00:02:24.521 
TEST_HEADER include/spdk/fd_group.h 00:02:24.521 TEST_HEADER include/spdk/fd.h 00:02:24.521 TEST_HEADER include/spdk/file.h 00:02:24.521 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:24.521 TEST_HEADER include/spdk/fsdev.h 00:02:24.521 TEST_HEADER include/spdk/fsdev_module.h 00:02:24.521 TEST_HEADER include/spdk/ftl.h 00:02:24.521 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:24.521 TEST_HEADER include/spdk/gpt_spec.h 00:02:24.521 TEST_HEADER include/spdk/hexlify.h 00:02:24.521 TEST_HEADER include/spdk/idxd_spec.h 00:02:24.521 TEST_HEADER include/spdk/idxd.h 00:02:24.521 TEST_HEADER include/spdk/histogram_data.h 00:02:24.521 TEST_HEADER include/spdk/init.h 00:02:24.521 TEST_HEADER include/spdk/ioat.h 00:02:24.521 TEST_HEADER include/spdk/iscsi_spec.h 00:02:24.521 TEST_HEADER include/spdk/ioat_spec.h 00:02:24.521 TEST_HEADER include/spdk/json.h 00:02:24.521 TEST_HEADER include/spdk/jsonrpc.h 00:02:24.521 TEST_HEADER include/spdk/keyring_module.h 00:02:24.521 TEST_HEADER include/spdk/keyring.h 00:02:24.521 TEST_HEADER include/spdk/log.h 00:02:24.521 TEST_HEADER include/spdk/likely.h 00:02:24.521 CC app/nvmf_tgt/nvmf_main.o 00:02:24.521 TEST_HEADER include/spdk/md5.h 00:02:24.521 TEST_HEADER include/spdk/mmio.h 00:02:24.521 TEST_HEADER include/spdk/memory.h 00:02:24.521 TEST_HEADER include/spdk/lvol.h 00:02:24.521 TEST_HEADER include/spdk/nbd.h 00:02:24.521 TEST_HEADER include/spdk/notify.h 00:02:24.521 TEST_HEADER include/spdk/net.h 00:02:24.521 TEST_HEADER include/spdk/nvme_intel.h 00:02:24.521 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:24.521 TEST_HEADER include/spdk/nvme.h 00:02:24.521 TEST_HEADER include/spdk/nvme_spec.h 00:02:24.521 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:24.521 TEST_HEADER include/spdk/nvme_zns.h 00:02:24.521 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:24.521 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:24.521 TEST_HEADER include/spdk/nvmf.h 00:02:24.521 TEST_HEADER include/spdk/opal.h 00:02:24.521 TEST_HEADER include/spdk/nvmf_transport.h 00:02:24.521 TEST_HEADER include/spdk/opal_spec.h 00:02:24.521 TEST_HEADER include/spdk/pci_ids.h 00:02:24.521 TEST_HEADER include/spdk/nvmf_spec.h 00:02:24.521 TEST_HEADER include/spdk/queue.h 00:02:24.521 TEST_HEADER include/spdk/reduce.h 00:02:24.521 TEST_HEADER include/spdk/rpc.h 00:02:24.521 TEST_HEADER include/spdk/pipe.h 00:02:24.521 TEST_HEADER include/spdk/scsi.h 00:02:24.521 TEST_HEADER include/spdk/scheduler.h 00:02:24.521 TEST_HEADER include/spdk/scsi_spec.h 00:02:24.521 TEST_HEADER include/spdk/sock.h 00:02:24.521 TEST_HEADER include/spdk/stdinc.h 00:02:24.521 TEST_HEADER include/spdk/string.h 00:02:24.521 TEST_HEADER include/spdk/thread.h 00:02:24.521 CC app/iscsi_tgt/iscsi_tgt.o 00:02:24.521 TEST_HEADER include/spdk/trace.h 00:02:24.521 TEST_HEADER include/spdk/trace_parser.h 00:02:24.521 TEST_HEADER include/spdk/tree.h 00:02:24.521 TEST_HEADER include/spdk/ublk.h 00:02:24.521 CC app/spdk_dd/spdk_dd.o 00:02:24.522 TEST_HEADER include/spdk/uuid.h 00:02:24.522 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:24.522 TEST_HEADER include/spdk/util.h 00:02:24.522 TEST_HEADER include/spdk/version.h 00:02:24.522 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:24.522 TEST_HEADER include/spdk/vhost.h 00:02:24.522 TEST_HEADER include/spdk/vmd.h 00:02:24.522 TEST_HEADER include/spdk/xor.h 00:02:24.522 TEST_HEADER include/spdk/zipf.h 00:02:24.522 CXX test/cpp_headers/accel.o 00:02:24.522 CXX test/cpp_headers/accel_module.o 00:02:24.522 CXX test/cpp_headers/assert.o 00:02:24.522 CXX test/cpp_headers/barrier.o 
00:02:24.522 CXX test/cpp_headers/bdev_zone.o 00:02:24.522 CXX test/cpp_headers/bdev_module.o 00:02:24.522 CXX test/cpp_headers/base64.o 00:02:24.522 CXX test/cpp_headers/bdev.o 00:02:24.522 CXX test/cpp_headers/bit_array.o 00:02:24.522 CXX test/cpp_headers/blob_bdev.o 00:02:24.522 CXX test/cpp_headers/blobfs.o 00:02:24.522 CXX test/cpp_headers/bit_pool.o 00:02:24.522 CXX test/cpp_headers/blob.o 00:02:24.522 CXX test/cpp_headers/conf.o 00:02:24.522 CXX test/cpp_headers/blobfs_bdev.o 00:02:24.522 CXX test/cpp_headers/crc16.o 00:02:24.522 CXX test/cpp_headers/config.o 00:02:24.522 CXX test/cpp_headers/cpuset.o 00:02:24.522 CXX test/cpp_headers/crc32.o 00:02:24.522 CXX test/cpp_headers/crc64.o 00:02:24.522 CXX test/cpp_headers/dif.o 00:02:24.522 CXX test/cpp_headers/env_dpdk.o 00:02:24.522 CXX test/cpp_headers/endian.o 00:02:24.522 CXX test/cpp_headers/env.o 00:02:24.522 CXX test/cpp_headers/dma.o 00:02:24.522 CXX test/cpp_headers/fd.o 00:02:24.522 CXX test/cpp_headers/event.o 00:02:24.522 CXX test/cpp_headers/fd_group.o 00:02:24.522 CXX test/cpp_headers/fsdev.o 00:02:24.522 CXX test/cpp_headers/file.o 00:02:24.522 CXX test/cpp_headers/fsdev_module.o 00:02:24.522 CXX test/cpp_headers/ftl.o 00:02:24.522 CXX test/cpp_headers/fuse_dispatcher.o 00:02:24.522 CXX test/cpp_headers/hexlify.o 00:02:24.522 CXX test/cpp_headers/histogram_data.o 00:02:24.522 CXX test/cpp_headers/gpt_spec.o 00:02:24.522 CXX test/cpp_headers/idxd.o 00:02:24.522 CXX test/cpp_headers/idxd_spec.o 00:02:24.522 CXX test/cpp_headers/ioat.o 00:02:24.522 CXX test/cpp_headers/ioat_spec.o 00:02:24.522 CXX test/cpp_headers/init.o 00:02:24.522 CXX test/cpp_headers/iscsi_spec.o 00:02:24.522 CXX test/cpp_headers/jsonrpc.o 00:02:24.522 CXX test/cpp_headers/json.o 00:02:24.522 CXX test/cpp_headers/keyring.o 00:02:24.522 CXX test/cpp_headers/likely.o 00:02:24.522 CXX test/cpp_headers/keyring_module.o 00:02:24.522 CC app/spdk_tgt/spdk_tgt.o 00:02:24.522 CXX test/cpp_headers/md5.o 00:02:24.522 CXX test/cpp_headers/log.o 00:02:24.522 CXX test/cpp_headers/lvol.o 00:02:24.522 CXX test/cpp_headers/mmio.o 00:02:24.522 CXX test/cpp_headers/memory.o 00:02:24.522 CXX test/cpp_headers/net.o 00:02:24.522 CXX test/cpp_headers/nvme.o 00:02:24.522 CXX test/cpp_headers/nbd.o 00:02:24.522 CXX test/cpp_headers/notify.o 00:02:24.522 CXX test/cpp_headers/nvme_intel.o 00:02:24.522 CXX test/cpp_headers/nvme_ocssd.o 00:02:24.522 CC examples/ioat/verify/verify.o 00:02:24.522 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:24.522 CXX test/cpp_headers/nvme_zns.o 00:02:24.522 CXX test/cpp_headers/nvme_spec.o 00:02:24.522 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:24.522 CXX test/cpp_headers/nvmf.o 00:02:24.522 CXX test/cpp_headers/nvmf_cmd.o 00:02:24.522 CXX test/cpp_headers/nvmf_spec.o 00:02:24.522 CXX test/cpp_headers/nvmf_transport.o 00:02:24.522 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.522 CC test/thread/poller_perf/poller_perf.o 00:02:24.522 CC examples/ioat/perf/perf.o 00:02:24.522 CXX test/cpp_headers/opal.o 00:02:24.522 CC test/app/histogram_perf/histogram_perf.o 00:02:24.522 CC examples/util/zipf/zipf.o 00:02:24.522 CC test/env/vtophys/vtophys.o 00:02:24.522 CC test/env/pci/pci_ut.o 00:02:24.522 CC test/env/memory/memory_ut.o 00:02:24.522 CXX test/cpp_headers/opal_spec.o 00:02:24.522 CC test/app/stub/stub.o 00:02:24.522 CC test/app/jsoncat/jsoncat.o 00:02:24.522 CC test/app/bdev_svc/bdev_svc.o 00:02:24.522 CC app/fio/bdev/fio_plugin.o 00:02:24.522 CC app/fio/nvme/fio_plugin.o 00:02:24.793 CC test/dma/test_dma/test_dma.o 00:02:24.793 LINK 
spdk_lspci 00:02:25.058 LINK rpc_client_test 00:02:25.058 LINK spdk_nvme_discover 00:02:25.058 CC test/env/mem_callbacks/mem_callbacks.o 00:02:25.058 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:25.058 LINK interrupt_tgt 00:02:25.058 LINK spdk_trace_record 00:02:25.058 LINK env_dpdk_post_init 00:02:25.058 LINK nvmf_tgt 00:02:25.058 LINK iscsi_tgt 00:02:25.058 CXX test/cpp_headers/pci_ids.o 00:02:25.058 CXX test/cpp_headers/pipe.o 00:02:25.058 LINK verify 00:02:25.058 CXX test/cpp_headers/queue.o 00:02:25.058 CXX test/cpp_headers/reduce.o 00:02:25.058 LINK stub 00:02:25.058 CXX test/cpp_headers/scsi.o 00:02:25.058 CXX test/cpp_headers/scsi_spec.o 00:02:25.058 CXX test/cpp_headers/rpc.o 00:02:25.058 CXX test/cpp_headers/scheduler.o 00:02:25.058 CXX test/cpp_headers/sock.o 00:02:25.058 CXX test/cpp_headers/stdinc.o 00:02:25.058 CXX test/cpp_headers/thread.o 00:02:25.058 CXX test/cpp_headers/string.o 00:02:25.058 CXX test/cpp_headers/trace.o 00:02:25.058 CXX test/cpp_headers/trace_parser.o 00:02:25.058 LINK poller_perf 00:02:25.058 CXX test/cpp_headers/tree.o 00:02:25.058 CXX test/cpp_headers/ublk.o 00:02:25.058 CXX test/cpp_headers/util.o 00:02:25.058 LINK ioat_perf 00:02:25.058 LINK vtophys 00:02:25.058 CXX test/cpp_headers/uuid.o 00:02:25.058 CXX test/cpp_headers/version.o 00:02:25.058 CXX test/cpp_headers/vfio_user_pci.o 00:02:25.059 LINK histogram_perf 00:02:25.059 CXX test/cpp_headers/vfio_user_spec.o 00:02:25.059 CXX test/cpp_headers/vhost.o 00:02:25.059 CXX test/cpp_headers/vmd.o 00:02:25.059 CXX test/cpp_headers/xor.o 00:02:25.059 CXX test/cpp_headers/zipf.o 00:02:25.059 LINK zipf 00:02:25.059 LINK jsoncat 00:02:25.318 LINK spdk_tgt 00:02:25.318 LINK bdev_svc 00:02:25.318 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:25.318 LINK spdk_dd 00:02:25.318 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:25.318 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:25.318 LINK pci_ut 00:02:25.318 LINK spdk_trace 00:02:25.576 LINK spdk_bdev 00:02:25.576 LINK nvme_fuzz 00:02:25.576 LINK spdk_nvme 00:02:25.576 LINK test_dma 00:02:25.576 CC examples/idxd/perf/perf.o 00:02:25.576 CC examples/sock/hello_world/hello_sock.o 00:02:25.576 LINK spdk_nvme_identify 00:02:25.576 CC test/event/reactor/reactor.o 00:02:25.576 CC test/event/event_perf/event_perf.o 00:02:25.576 CC test/event/reactor_perf/reactor_perf.o 00:02:25.576 CC examples/thread/thread/thread_ex.o 00:02:25.576 CC examples/vmd/led/led.o 00:02:25.576 CC examples/vmd/lsvmd/lsvmd.o 00:02:25.576 LINK spdk_nvme_perf 00:02:25.576 LINK mem_callbacks 00:02:25.576 LINK vhost_fuzz 00:02:25.576 CC test/event/app_repeat/app_repeat.o 00:02:25.836 CC test/event/scheduler/scheduler.o 00:02:25.836 LINK spdk_top 00:02:25.836 CC app/vhost/vhost.o 00:02:25.836 LINK event_perf 00:02:25.836 LINK reactor 00:02:25.836 LINK reactor_perf 00:02:25.836 LINK lsvmd 00:02:25.836 LINK led 00:02:25.836 LINK hello_sock 00:02:25.836 LINK app_repeat 00:02:25.836 LINK thread 00:02:25.836 LINK idxd_perf 00:02:25.836 LINK vhost 00:02:25.836 LINK scheduler 00:02:26.095 LINK memory_ut 00:02:26.095 CC test/nvme/startup/startup.o 00:02:26.095 CC test/nvme/compliance/nvme_compliance.o 00:02:26.095 CC test/nvme/sgl/sgl.o 00:02:26.095 CC test/nvme/reset/reset.o 00:02:26.095 CC test/nvme/e2edp/nvme_dp.o 00:02:26.095 CC test/nvme/reserve/reserve.o 00:02:26.095 CC test/nvme/aer/aer.o 00:02:26.095 CC test/nvme/simple_copy/simple_copy.o 00:02:26.095 CC test/nvme/err_injection/err_injection.o 00:02:26.095 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:26.095 CC test/nvme/overhead/overhead.o 
00:02:26.095 CC test/nvme/boot_partition/boot_partition.o 00:02:26.095 CC test/nvme/connect_stress/connect_stress.o 00:02:26.095 CC test/nvme/fused_ordering/fused_ordering.o 00:02:26.095 CC test/nvme/cuse/cuse.o 00:02:26.095 CC test/nvme/fdp/fdp.o 00:02:26.095 CC test/blobfs/mkfs/mkfs.o 00:02:26.095 CC test/accel/dif/dif.o 00:02:26.354 CC test/lvol/esnap/esnap.o 00:02:26.354 CC examples/nvme/hello_world/hello_world.o 00:02:26.354 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:26.354 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:26.354 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:26.354 CC examples/nvme/hotplug/hotplug.o 00:02:26.354 CC examples/nvme/arbitration/arbitration.o 00:02:26.354 CC examples/nvme/reconnect/reconnect.o 00:02:26.354 LINK startup 00:02:26.354 CC examples/nvme/abort/abort.o 00:02:26.354 LINK boot_partition 00:02:26.354 LINK doorbell_aers 00:02:26.354 LINK reserve 00:02:26.354 LINK connect_stress 00:02:26.354 LINK fused_ordering 00:02:26.354 LINK err_injection 00:02:26.354 LINK simple_copy 00:02:26.354 LINK reset 00:02:26.354 LINK mkfs 00:02:26.354 LINK nvme_dp 00:02:26.354 LINK sgl 00:02:26.354 LINK nvme_compliance 00:02:26.354 LINK aer 00:02:26.354 CC examples/accel/perf/accel_perf.o 00:02:26.354 LINK overhead 00:02:26.354 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:26.354 CC examples/blob/cli/blobcli.o 00:02:26.354 CC examples/blob/hello_world/hello_blob.o 00:02:26.354 LINK fdp 00:02:26.354 LINK cmb_copy 00:02:26.354 LINK pmr_persistence 00:02:26.612 LINK hello_world 00:02:26.612 LINK hotplug 00:02:26.612 LINK arbitration 00:02:26.612 LINK reconnect 00:02:26.612 LINK abort 00:02:26.612 LINK iscsi_fuzz 00:02:26.612 LINK hello_blob 00:02:26.612 LINK nvme_manage 00:02:26.612 LINK hello_fsdev 00:02:26.612 LINK dif 00:02:26.870 LINK accel_perf 00:02:26.870 LINK blobcli 00:02:27.128 LINK cuse 00:02:27.128 CC test/bdev/bdevio/bdevio.o 00:02:27.386 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.386 CC examples/bdev/bdevperf/bdevperf.o 00:02:27.644 LINK hello_bdev 00:02:27.644 LINK bdevio 00:02:27.904 LINK bdevperf 00:02:28.473 CC examples/nvmf/nvmf/nvmf.o 00:02:28.734 LINK nvmf 00:02:29.672 LINK esnap 00:02:29.931 00:02:29.931 real 0m55.420s 00:02:29.931 user 8m1.254s 00:02:29.931 sys 3m41.967s 00:02:29.931 14:12:19 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:29.931 14:12:19 make -- common/autotest_common.sh@10 -- $ set +x 00:02:29.931 ************************************ 00:02:29.931 END TEST make 00:02:29.931 ************************************ 00:02:29.931 14:12:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:29.931 14:12:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:29.931 14:12:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:29.931 14:12:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.931 14:12:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:29.931 14:12:19 -- pm/common@44 -- $ pid=1186794 00:02:29.931 14:12:19 -- pm/common@50 -- $ kill -TERM 1186794 00:02:29.931 14:12:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.931 14:12:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:29.931 14:12:19 -- pm/common@44 -- $ pid=1186796 00:02:29.931 14:12:19 -- pm/common@50 -- $ kill -TERM 1186796 00:02:29.931 14:12:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.931 
14:12:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:29.931 14:12:19 -- pm/common@44 -- $ pid=1186797 00:02:29.931 14:12:19 -- pm/common@50 -- $ kill -TERM 1186797 00:02:29.931 14:12:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.931 14:12:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:29.931 14:12:19 -- pm/common@44 -- $ pid=1186824 00:02:29.931 14:12:19 -- pm/common@50 -- $ sudo -E kill -TERM 1186824 00:02:30.191 14:12:19 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:30.192 14:12:19 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:30.192 14:12:19 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:30.192 14:12:19 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:30.192 14:12:19 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:30.192 14:12:19 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:30.192 14:12:19 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:30.192 14:12:19 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:30.192 14:12:19 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:30.192 14:12:19 -- scripts/common.sh@336 -- # IFS=.-: 00:02:30.192 14:12:19 -- scripts/common.sh@336 -- # read -ra ver1 00:02:30.192 14:12:19 -- scripts/common.sh@337 -- # IFS=.-: 00:02:30.192 14:12:19 -- scripts/common.sh@337 -- # read -ra ver2 00:02:30.192 14:12:19 -- scripts/common.sh@338 -- # local 'op=<' 00:02:30.192 14:12:19 -- scripts/common.sh@340 -- # ver1_l=2 00:02:30.192 14:12:19 -- scripts/common.sh@341 -- # ver2_l=1 00:02:30.192 14:12:19 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:30.192 14:12:19 -- scripts/common.sh@344 -- # case "$op" in 00:02:30.192 14:12:19 -- scripts/common.sh@345 -- # : 1 00:02:30.192 14:12:19 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:30.192 14:12:19 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:30.192 14:12:19 -- scripts/common.sh@365 -- # decimal 1 00:02:30.192 14:12:19 -- scripts/common.sh@353 -- # local d=1 00:02:30.192 14:12:19 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:30.192 14:12:19 -- scripts/common.sh@355 -- # echo 1 00:02:30.192 14:12:19 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:30.192 14:12:19 -- scripts/common.sh@366 -- # decimal 2 00:02:30.192 14:12:19 -- scripts/common.sh@353 -- # local d=2 00:02:30.192 14:12:19 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:30.192 14:12:19 -- scripts/common.sh@355 -- # echo 2 00:02:30.193 14:12:19 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:30.193 14:12:19 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:30.193 14:12:19 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:30.193 14:12:19 -- scripts/common.sh@368 -- # return 0 00:02:30.193 14:12:19 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:30.193 14:12:19 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:30.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.193 --rc genhtml_branch_coverage=1 00:02:30.193 --rc genhtml_function_coverage=1 00:02:30.193 --rc genhtml_legend=1 00:02:30.193 --rc geninfo_all_blocks=1 00:02:30.193 --rc geninfo_unexecuted_blocks=1 00:02:30.193 00:02:30.193 ' 00:02:30.193 14:12:19 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:30.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.193 --rc genhtml_branch_coverage=1 00:02:30.193 --rc genhtml_function_coverage=1 00:02:30.193 --rc genhtml_legend=1 00:02:30.193 --rc geninfo_all_blocks=1 00:02:30.193 --rc geninfo_unexecuted_blocks=1 00:02:30.193 00:02:30.193 ' 00:02:30.193 14:12:19 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:30.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.193 --rc genhtml_branch_coverage=1 00:02:30.193 --rc genhtml_function_coverage=1 00:02:30.193 --rc genhtml_legend=1 00:02:30.193 --rc geninfo_all_blocks=1 00:02:30.193 --rc geninfo_unexecuted_blocks=1 00:02:30.193 00:02:30.193 ' 00:02:30.193 14:12:19 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:30.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.193 --rc genhtml_branch_coverage=1 00:02:30.193 --rc genhtml_function_coverage=1 00:02:30.193 --rc genhtml_legend=1 00:02:30.193 --rc geninfo_all_blocks=1 00:02:30.193 --rc geninfo_unexecuted_blocks=1 00:02:30.193 00:02:30.193 ' 00:02:30.193 14:12:19 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:30.193 14:12:19 -- nvmf/common.sh@7 -- # uname -s 00:02:30.193 14:12:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:30.193 14:12:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:30.193 14:12:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:30.193 14:12:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:30.193 14:12:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:30.194 14:12:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:30.194 14:12:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:30.194 14:12:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:30.194 14:12:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:30.194 14:12:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:30.194 14:12:19 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:30.194 14:12:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:30.194 14:12:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:30.194 14:12:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:30.194 14:12:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:30.194 14:12:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:30.194 14:12:19 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:30.194 14:12:19 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:30.194 14:12:19 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:30.194 14:12:19 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:30.194 14:12:19 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:30.194 14:12:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.194 14:12:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.194 14:12:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.194 14:12:19 -- paths/export.sh@5 -- # export PATH 00:02:30.194 14:12:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.194 14:12:19 -- nvmf/common.sh@51 -- # : 0 00:02:30.194 14:12:19 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:30.194 14:12:19 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:30.194 14:12:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:30.195 14:12:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:30.195 14:12:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:30.195 14:12:19 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:30.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:30.195 14:12:19 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:30.195 14:12:19 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:30.195 14:12:19 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:30.195 14:12:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:30.195 14:12:19 -- spdk/autotest.sh@32 -- # uname -s 00:02:30.195 14:12:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:30.195 14:12:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:30.195 14:12:19 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
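A minimal sketch of the core-dump plumbing traced here, assuming only the standard kernel core_pattern pipe interface; the xtrace above does not show where each echo is redirected, and $rootdir / $output_dir abbreviate the long workspace paths:

    # Route kernel core dumps through SPDK's collector script. %P, %s and %t
    # expand to the dumping process's PID, the signal number and the dump
    # time; writing core_pattern requires root.
    mkdir -p "$output_dir/coredumps"
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern

The harness also records $output_dir/coredumps itself (the second echo in the trace), so the collector knows where to place whatever the kernel pipes to it.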
00:02:30.195 14:12:19 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:30.195 14:12:19 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:30.195 14:12:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:30.195 14:12:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:30.195 14:12:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:30.195 14:12:19 -- spdk/autotest.sh@48 -- # udevadm_pid=1249570 00:02:30.195 14:12:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:30.195 14:12:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:30.195 14:12:19 -- pm/common@17 -- # local monitor 00:02:30.195 14:12:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.195 14:12:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.195 14:12:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.195 14:12:19 -- pm/common@21 -- # date +%s 00:02:30.196 14:12:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.196 14:12:19 -- pm/common@21 -- # date +%s 00:02:30.196 14:12:19 -- pm/common@25 -- # sleep 1 00:02:30.196 14:12:19 -- pm/common@21 -- # date +%s 00:02:30.196 14:12:19 -- pm/common@21 -- # date +%s 00:02:30.196 14:12:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731849139 00:02:30.196 14:12:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731849139 00:02:30.196 14:12:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731849139 00:02:30.196 14:12:19 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731849139 00:02:30.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731849139_collect-vmstat.pm.log 00:02:30.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731849139_collect-cpu-load.pm.log 00:02:30.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731849139_collect-cpu-temp.pm.log 00:02:30.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731849139_collect-bmc-pm.bmc.pm.log 00:02:31.395 14:12:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:31.395 14:12:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:31.395 14:12:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:31.395 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:02:31.395 14:12:20 -- spdk/autotest.sh@59 -- # create_test_list 00:02:31.395 14:12:20 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:31.395 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:02:31.396 14:12:20 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:31.396 14:12:20 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.396 14:12:20 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.396 14:12:20 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:31.396 14:12:20 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.396 14:12:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:31.396 14:12:20 -- common/autotest_common.sh@1457 -- # uname 00:02:31.396 14:12:20 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:31.396 14:12:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:31.396 14:12:20 -- common/autotest_common.sh@1477 -- # uname 00:02:31.396 14:12:20 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:31.396 14:12:20 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:31.396 14:12:20 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:31.396 lcov: LCOV version 1.15 00:02:31.396 14:12:20 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:49.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:49.494 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:57.617 14:12:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:57.617 14:12:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:57.617 14:12:45 -- common/autotest_common.sh@10 -- # set +x 00:02:57.617 14:12:45 -- spdk/autotest.sh@78 -- # rm -f 00:02:57.617 14:12:45 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.997 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:58.997 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:58.997 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:58.997 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:59.256 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:59.256 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:59.256 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:59.256 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:59.257 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:59.257 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:59.257 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:59.257 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:59.257 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:59.257 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:59.257 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:59.257 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:59.516 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:59.516 14:12:48 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:02:59.516 14:12:48 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:59.516 14:12:48 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:59.516 14:12:48 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:59.516 14:12:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:59.516 14:12:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:59.516 14:12:48 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:59.516 14:12:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:59.516 14:12:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:59.516 14:12:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:59.516 14:12:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:59.516 14:12:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:59.516 14:12:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:59.516 14:12:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:59.516 14:12:48 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:59.516 No valid GPT data, bailing 00:02:59.516 14:12:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:59.516 14:12:48 -- scripts/common.sh@394 -- # pt= 00:02:59.516 14:12:48 -- scripts/common.sh@395 -- # return 1 00:02:59.516 14:12:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:59.516 1+0 records in 00:02:59.516 1+0 records out 00:02:59.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00138395 s, 758 MB/s 00:02:59.516 14:12:48 -- spdk/autotest.sh@105 -- # sync 00:02:59.516 14:12:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:59.516 14:12:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:59.516 14:12:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:06.109 14:12:54 -- spdk/autotest.sh@111 -- # uname -s 00:03:06.109 14:12:54 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:06.109 14:12:54 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:06.109 14:12:54 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:08.015 Hugepages 00:03:08.015 node hugesize free / total 00:03:08.015 node0 1048576kB 0 / 0 00:03:08.015 node0 2048kB 1024 / 1024 00:03:08.015 node1 1048576kB 0 / 0 00:03:08.015 node1 2048kB 1024 / 1024 00:03:08.015 00:03:08.015 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:08.015 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:08.015 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:08.015 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:08.015 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:08.015 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:08.015 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:08.015 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:08.015 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:08.015 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:08.015 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:08.015 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:08.015 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:08.015 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:08.015 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:08.015 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:08.015 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:08.015 I/OAT 
0000:80:04.7 8086 2021 1 ioatdma - - 00:03:08.015 14:12:57 -- spdk/autotest.sh@117 -- # uname -s 00:03:08.015 14:12:57 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:08.015 14:12:57 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:08.015 14:12:57 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:11.304 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:11.304 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:11.305 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:11.872 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:11.872 14:13:01 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:13.252 14:13:02 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:13.252 14:13:02 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:13.252 14:13:02 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:13.252 14:13:02 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:13.252 14:13:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:13.252 14:13:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:13.252 14:13:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:13.252 14:13:02 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:13.252 14:13:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:13.252 14:13:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:13.252 14:13:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:13.252 14:13:02 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.789 Waiting for block devices as requested 00:03:15.789 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:16.047 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:16.047 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:16.047 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:16.306 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:16.306 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:16.306 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:16.306 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:16.565 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:16.565 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:16.565 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:16.824 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:16.824 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:16.824 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:17.083 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:17.083 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:17.083 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:17.083 14:13:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:17.083 14:13:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:17.083 14:13:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:17.083 14:13:06 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:17.083 14:13:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:17.083 14:13:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:17.342 14:13:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:17.342 14:13:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:17.342 14:13:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:17.342 14:13:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:17.342 14:13:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:17.342 14:13:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:17.342 14:13:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:17.342 14:13:06 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:17.342 14:13:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:17.342 14:13:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:17.342 14:13:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:17.342 14:13:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:17.342 14:13:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:17.342 14:13:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:17.342 14:13:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:17.342 14:13:06 -- common/autotest_common.sh@1543 -- # continue 00:03:17.342 14:13:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:17.342 14:13:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:17.342 14:13:06 -- common/autotest_common.sh@10 -- # set +x 00:03:17.342 14:13:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:17.342 14:13:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:17.342 14:13:06 -- common/autotest_common.sh@10 -- # set +x 00:03:17.342 14:13:06 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.633 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:20.633 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:21.202 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:21.202 14:13:10 -- 
spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:21.202 14:13:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:21.202 14:13:10 -- common/autotest_common.sh@10 -- # set +x 00:03:21.202 14:13:10 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:21.202 14:13:10 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:21.202 14:13:10 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:21.202 14:13:10 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:21.202 14:13:10 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:21.202 14:13:10 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:21.202 14:13:10 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:21.202 14:13:10 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:21.202 14:13:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:21.202 14:13:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:21.202 14:13:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:21.202 14:13:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:21.202 14:13:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:21.202 14:13:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:21.202 14:13:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:21.202 14:13:10 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:21.202 14:13:10 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:21.202 14:13:10 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:21.202 14:13:10 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:21.202 14:13:10 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:21.202 14:13:10 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:21.202 14:13:10 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:21.202 14:13:10 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:21.202 14:13:10 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1263809 00:03:21.202 14:13:10 -- common/autotest_common.sh@1585 -- # waitforlisten 1263809 00:03:21.202 14:13:10 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:21.202 14:13:10 -- common/autotest_common.sh@835 -- # '[' -z 1263809 ']' 00:03:21.202 14:13:10 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:21.202 14:13:10 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:21.202 14:13:10 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:21.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:21.202 14:13:10 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:21.202 14:13:10 -- common/autotest_common.sh@10 -- # set +x 00:03:21.463 [2024-11-17 14:13:10.445038] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
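The bdf-discovery idiom in the trace above, restated as a sketch with $rootdir standing in for the workspace path (the filtering loop is illustrative, mirroring what get_nvme_bdfs_by_id does):

    # gen_nvme.sh emits a bdev JSON config covering every NVMe controller it
    # finds; jq pulls out each controller's PCI address (traddr).
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
      # keep only device ID 0x0a54 parts, as the opal revert cleanup does;
      # in this run that matches the single controller at 0000:5e:00.0
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done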
00:03:21.463 [2024-11-17 14:13:10.445089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263809 ] 00:03:21.463 [2024-11-17 14:13:10.522588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:21.463 [2024-11-17 14:13:10.564401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:21.722 14:13:10 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:21.722 14:13:10 -- common/autotest_common.sh@868 -- # return 0 00:03:21.722 14:13:10 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:21.722 14:13:10 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:21.722 14:13:10 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:25.009 nvme0n1 00:03:25.009 14:13:13 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:25.009 [2024-11-17 14:13:13.988074] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:25.009 request: 00:03:25.009 { 00:03:25.009 "nvme_ctrlr_name": "nvme0", 00:03:25.009 "password": "test", 00:03:25.009 "method": "bdev_nvme_opal_revert", 00:03:25.009 "req_id": 1 00:03:25.009 } 00:03:25.009 Got JSON-RPC error response 00:03:25.009 response: 00:03:25.009 { 00:03:25.009 "code": -32602, 00:03:25.009 "message": "Invalid parameters" 00:03:25.009 } 00:03:25.009 14:13:14 -- common/autotest_common.sh@1591 -- # true 00:03:25.009 14:13:14 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:25.009 14:13:14 -- common/autotest_common.sh@1595 -- # killprocess 1263809 00:03:25.009 14:13:14 -- common/autotest_common.sh@954 -- # '[' -z 1263809 ']' 00:03:25.009 14:13:14 -- common/autotest_common.sh@958 -- # kill -0 1263809 00:03:25.009 14:13:14 -- common/autotest_common.sh@959 -- # uname 00:03:25.009 14:13:14 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:25.009 14:13:14 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1263809 00:03:25.009 14:13:14 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:25.009 14:13:14 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:25.009 14:13:14 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1263809' 00:03:25.009 killing process with pid 1263809 00:03:25.009 14:13:14 -- common/autotest_common.sh@973 -- # kill 1263809 00:03:25.009 14:13:14 -- common/autotest_common.sh@978 -- # wait 1263809 00:03:26.914 14:13:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:26.914 14:13:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:26.914 14:13:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:26.915 14:13:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:26.915 14:13:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:26.915 14:13:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:26.915 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:03:26.915 14:13:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:26.915 14:13:15 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:26.915 14:13:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.915 14:13:15 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:03:26.915 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:03:26.915 ************************************ 00:03:26.915 START TEST env 00:03:26.915 ************************************ 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:26.915 * Looking for test storage... 00:03:26.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:26.915 14:13:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:26.915 14:13:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:26.915 14:13:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:26.915 14:13:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:26.915 14:13:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:26.915 14:13:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:26.915 14:13:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:26.915 14:13:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:26.915 14:13:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:26.915 14:13:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:26.915 14:13:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:26.915 14:13:15 env -- scripts/common.sh@344 -- # case "$op" in 00:03:26.915 14:13:15 env -- scripts/common.sh@345 -- # : 1 00:03:26.915 14:13:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:26.915 14:13:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:26.915 14:13:15 env -- scripts/common.sh@365 -- # decimal 1 00:03:26.915 14:13:15 env -- scripts/common.sh@353 -- # local d=1 00:03:26.915 14:13:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:26.915 14:13:15 env -- scripts/common.sh@355 -- # echo 1 00:03:26.915 14:13:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:26.915 14:13:15 env -- scripts/common.sh@366 -- # decimal 2 00:03:26.915 14:13:15 env -- scripts/common.sh@353 -- # local d=2 00:03:26.915 14:13:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:26.915 14:13:15 env -- scripts/common.sh@355 -- # echo 2 00:03:26.915 14:13:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:26.915 14:13:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:26.915 14:13:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:26.915 14:13:15 env -- scripts/common.sh@368 -- # return 0 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.915 --rc genhtml_branch_coverage=1 00:03:26.915 --rc genhtml_function_coverage=1 00:03:26.915 --rc genhtml_legend=1 00:03:26.915 --rc geninfo_all_blocks=1 00:03:26.915 --rc geninfo_unexecuted_blocks=1 00:03:26.915 00:03:26.915 ' 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.915 --rc genhtml_branch_coverage=1 00:03:26.915 --rc genhtml_function_coverage=1 00:03:26.915 --rc genhtml_legend=1 00:03:26.915 --rc geninfo_all_blocks=1 00:03:26.915 --rc geninfo_unexecuted_blocks=1 00:03:26.915 00:03:26.915 ' 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.915 --rc genhtml_branch_coverage=1 00:03:26.915 --rc genhtml_function_coverage=1 00:03:26.915 --rc genhtml_legend=1 00:03:26.915 --rc geninfo_all_blocks=1 00:03:26.915 --rc geninfo_unexecuted_blocks=1 00:03:26.915 00:03:26.915 ' 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.915 --rc genhtml_branch_coverage=1 00:03:26.915 --rc genhtml_function_coverage=1 00:03:26.915 --rc genhtml_legend=1 00:03:26.915 --rc geninfo_all_blocks=1 00:03:26.915 --rc geninfo_unexecuted_blocks=1 00:03:26.915 00:03:26.915 ' 00:03:26.915 14:13:15 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.915 14:13:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.915 14:13:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.915 ************************************ 00:03:26.915 START TEST env_memory 00:03:26.915 ************************************ 00:03:26.915 14:13:15 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:26.915 00:03:26.915 00:03:26.915 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.915 http://cunit.sourceforge.net/ 00:03:26.915 00:03:26.915 00:03:26.915 Suite: memory 00:03:26.915 Test: alloc and free memory map ...[2024-11-17 14:13:15.960146] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:26.915 passed 00:03:26.915 Test: mem map translation ...[2024-11-17 14:13:15.979133] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:26.915 [2024-11-17 14:13:15.979147] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:26.915 [2024-11-17 14:13:15.979181] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:26.915 [2024-11-17 14:13:15.979187] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:26.915 passed 00:03:26.915 Test: mem map registration ...[2024-11-17 14:13:16.015925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:26.915 [2024-11-17 14:13:16.015939] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:26.915 passed 00:03:26.915 Test: mem map adjacent registrations ...passed 00:03:26.915 00:03:26.915 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.915 suites 1 1 n/a 0 0 00:03:26.915 tests 4 4 4 0 0 00:03:26.915 asserts 152 152 152 0 n/a 00:03:26.915 00:03:26.915 Elapsed time = 0.137 seconds 00:03:26.915 00:03:26.915 real 0m0.150s 00:03:26.915 user 0m0.139s 00:03:26.915 sys 0m0.010s 00:03:26.915 14:13:16 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.915 14:13:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:26.915 ************************************ 00:03:26.915 END TEST env_memory 00:03:26.915 ************************************ 00:03:26.915 14:13:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:26.915 14:13:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.915 14:13:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.915 14:13:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.915 ************************************ 00:03:26.915 START TEST env_vtophys 00:03:26.915 ************************************ 00:03:26.915 14:13:16 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:27.175 EAL: lib.eal log level changed from notice to debug 00:03:27.175 EAL: Detected lcore 0 as core 0 on socket 0 00:03:27.175 EAL: Detected lcore 1 as core 1 on socket 0 00:03:27.175 EAL: Detected lcore 2 as core 2 on socket 0 00:03:27.175 EAL: Detected lcore 3 as core 3 on socket 0 00:03:27.175 EAL: Detected lcore 4 as core 4 on socket 0 00:03:27.175 EAL: Detected lcore 5 as core 5 on socket 0 00:03:27.175 EAL: Detected lcore 6 as core 6 on socket 0 00:03:27.175 EAL: Detected lcore 7 as core 8 on socket 0 00:03:27.175 EAL: Detected lcore 8 as core 9 on socket 0 00:03:27.176 EAL: Detected lcore 9 as core 10 on socket 0 00:03:27.176 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:27.176 EAL: Detected lcore 11 as core 12 on socket 0 00:03:27.176 EAL: Detected lcore 12 as core 13 on socket 0 00:03:27.176 EAL: Detected lcore 13 as core 16 on socket 0 00:03:27.176 EAL: Detected lcore 14 as core 17 on socket 0 00:03:27.176 EAL: Detected lcore 15 as core 18 on socket 0 00:03:27.176 EAL: Detected lcore 16 as core 19 on socket 0 00:03:27.176 EAL: Detected lcore 17 as core 20 on socket 0 00:03:27.176 EAL: Detected lcore 18 as core 21 on socket 0 00:03:27.176 EAL: Detected lcore 19 as core 25 on socket 0 00:03:27.176 EAL: Detected lcore 20 as core 26 on socket 0 00:03:27.176 EAL: Detected lcore 21 as core 27 on socket 0 00:03:27.176 EAL: Detected lcore 22 as core 28 on socket 0 00:03:27.176 EAL: Detected lcore 23 as core 29 on socket 0 00:03:27.176 EAL: Detected lcore 24 as core 0 on socket 1 00:03:27.176 EAL: Detected lcore 25 as core 1 on socket 1 00:03:27.176 EAL: Detected lcore 26 as core 2 on socket 1 00:03:27.176 EAL: Detected lcore 27 as core 3 on socket 1 00:03:27.176 EAL: Detected lcore 28 as core 4 on socket 1 00:03:27.176 EAL: Detected lcore 29 as core 5 on socket 1 00:03:27.176 EAL: Detected lcore 30 as core 6 on socket 1 00:03:27.176 EAL: Detected lcore 31 as core 9 on socket 1 00:03:27.176 EAL: Detected lcore 32 as core 10 on socket 1 00:03:27.176 EAL: Detected lcore 33 as core 11 on socket 1 00:03:27.176 EAL: Detected lcore 34 as core 12 on socket 1 00:03:27.176 EAL: Detected lcore 35 as core 13 on socket 1 00:03:27.176 EAL: Detected lcore 36 as core 16 on socket 1 00:03:27.176 EAL: Detected lcore 37 as core 17 on socket 1 00:03:27.176 EAL: Detected lcore 38 as core 18 on socket 1 00:03:27.176 EAL: Detected lcore 39 as core 19 on socket 1 00:03:27.176 EAL: Detected lcore 40 as core 20 on socket 1 00:03:27.176 EAL: Detected lcore 41 as core 21 on socket 1 00:03:27.176 EAL: Detected lcore 42 as core 24 on socket 1 00:03:27.176 EAL: Detected lcore 43 as core 25 on socket 1 00:03:27.176 EAL: Detected lcore 44 as core 26 on socket 1 00:03:27.176 EAL: Detected lcore 45 as core 27 on socket 1 00:03:27.176 EAL: Detected lcore 46 as core 28 on socket 1 00:03:27.176 EAL: Detected lcore 47 as core 29 on socket 1 00:03:27.176 EAL: Detected lcore 48 as core 0 on socket 0 00:03:27.176 EAL: Detected lcore 49 as core 1 on socket 0 00:03:27.176 EAL: Detected lcore 50 as core 2 on socket 0 00:03:27.176 EAL: Detected lcore 51 as core 3 on socket 0 00:03:27.176 EAL: Detected lcore 52 as core 4 on socket 0 00:03:27.176 EAL: Detected lcore 53 as core 5 on socket 0 00:03:27.176 EAL: Detected lcore 54 as core 6 on socket 0 00:03:27.176 EAL: Detected lcore 55 as core 8 on socket 0 00:03:27.176 EAL: Detected lcore 56 as core 9 on socket 0 00:03:27.176 EAL: Detected lcore 57 as core 10 on socket 0 00:03:27.176 EAL: Detected lcore 58 as core 11 on socket 0 00:03:27.176 EAL: Detected lcore 59 as core 12 on socket 0 00:03:27.176 EAL: Detected lcore 60 as core 13 on socket 0 00:03:27.176 EAL: Detected lcore 61 as core 16 on socket 0 00:03:27.176 EAL: Detected lcore 62 as core 17 on socket 0 00:03:27.176 EAL: Detected lcore 63 as core 18 on socket 0 00:03:27.176 EAL: Detected lcore 64 as core 19 on socket 0 00:03:27.176 EAL: Detected lcore 65 as core 20 on socket 0 00:03:27.176 EAL: Detected lcore 66 as core 21 on socket 0 00:03:27.176 EAL: Detected lcore 67 as core 25 on socket 0 00:03:27.176 EAL: Detected lcore 68 as core 26 on socket 0 00:03:27.176 EAL: Detected lcore 69 as core 27 on socket 0 00:03:27.176 EAL: Detected lcore 70 as core 28 on socket 0 
00:03:27.176 EAL: Detected lcore 71 as core 29 on socket 0 00:03:27.176 EAL: Detected lcore 72 as core 0 on socket 1 00:03:27.176 EAL: Detected lcore 73 as core 1 on socket 1 00:03:27.176 EAL: Detected lcore 74 as core 2 on socket 1 00:03:27.176 EAL: Detected lcore 75 as core 3 on socket 1 00:03:27.176 EAL: Detected lcore 76 as core 4 on socket 1 00:03:27.176 EAL: Detected lcore 77 as core 5 on socket 1 00:03:27.176 EAL: Detected lcore 78 as core 6 on socket 1 00:03:27.176 EAL: Detected lcore 79 as core 9 on socket 1 00:03:27.176 EAL: Detected lcore 80 as core 10 on socket 1 00:03:27.176 EAL: Detected lcore 81 as core 11 on socket 1 00:03:27.176 EAL: Detected lcore 82 as core 12 on socket 1 00:03:27.176 EAL: Detected lcore 83 as core 13 on socket 1 00:03:27.176 EAL: Detected lcore 84 as core 16 on socket 1 00:03:27.176 EAL: Detected lcore 85 as core 17 on socket 1 00:03:27.176 EAL: Detected lcore 86 as core 18 on socket 1 00:03:27.176 EAL: Detected lcore 87 as core 19 on socket 1 00:03:27.176 EAL: Detected lcore 88 as core 20 on socket 1 00:03:27.176 EAL: Detected lcore 89 as core 21 on socket 1 00:03:27.176 EAL: Detected lcore 90 as core 24 on socket 1 00:03:27.176 EAL: Detected lcore 91 as core 25 on socket 1 00:03:27.176 EAL: Detected lcore 92 as core 26 on socket 1 00:03:27.176 EAL: Detected lcore 93 as core 27 on socket 1 00:03:27.176 EAL: Detected lcore 94 as core 28 on socket 1 00:03:27.176 EAL: Detected lcore 95 as core 29 on socket 1 00:03:27.176 EAL: Maximum logical cores by configuration: 128 00:03:27.176 EAL: Detected CPU lcores: 96 00:03:27.176 EAL: Detected NUMA nodes: 2 00:03:27.176 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:27.176 EAL: Detected shared linkage of DPDK 00:03:27.176 EAL: No shared files mode enabled, IPC will be disabled 00:03:27.176 EAL: Bus pci wants IOVA as 'DC' 00:03:27.176 EAL: Buses did not request a specific IOVA mode. 00:03:27.176 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:27.176 EAL: Selected IOVA mode 'VA' 00:03:27.176 EAL: Probing VFIO support... 00:03:27.176 EAL: IOMMU type 1 (Type 1) is supported 00:03:27.176 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:27.176 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:27.176 EAL: VFIO support initialized 00:03:27.176 EAL: Ask a virtual area of 0x2e000 bytes 00:03:27.176 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:27.176 EAL: Setting up physically contiguous memory... 
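The memseg lists EAL reserves next are backed by the 2 MB hugepage pools reported earlier by setup.sh status (1024 pages on each of the two NUMA nodes; the 0x800kB in the EAL lines below is that same 2 MB page size in hex). A quick host-side check, as a sketch:

    # Per-node 2MB hugepage counts that the memseg lists below draw from
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages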
00:03:27.176 EAL: Setting maximum number of open files to 524288 00:03:27.176 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:27.176 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:27.176 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:27.176 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.176 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:27.176 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.176 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.176 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:27.176 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:27.176 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.176 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:27.176 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.176 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.176 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:27.176 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:27.176 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.176 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:27.176 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.176 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.176 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:27.176 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:27.176 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.176 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:27.176 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.176 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.176 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:27.176 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:27.176 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:27.176 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.176 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:27.176 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.176 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.176 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:27.176 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:27.176 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.176 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:27.176 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.176 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.176 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:27.176 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:27.176 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.176 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:27.176 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.176 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.176 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:27.176 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:27.176 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.176 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:27.176 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.176 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.176 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:27.176 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:27.176 EAL: Hugepages will be freed exactly as allocated. 00:03:27.176 EAL: No shared files mode enabled, IPC is disabled 00:03:27.176 EAL: No shared files mode enabled, IPC is disabled 00:03:27.176 EAL: TSC frequency is ~2300000 KHz 00:03:27.176 EAL: Main lcore 0 is ready (tid=7f8cbb0aaa00;cpuset=[0]) 00:03:27.176 EAL: Trying to obtain current memory policy. 00:03:27.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.176 EAL: Restoring previous memory policy: 0 00:03:27.176 EAL: request: mp_malloc_sync 00:03:27.176 EAL: No shared files mode enabled, IPC is disabled 00:03:27.176 EAL: Heap on socket 0 was expanded by 2MB 00:03:27.176 EAL: No shared files mode enabled, IPC is disabled 00:03:27.176 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:27.176 EAL: Mem event callback 'spdk:(nil)' registered 00:03:27.176 00:03:27.176 00:03:27.176 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.176 http://cunit.sourceforge.net/ 00:03:27.176 00:03:27.176 00:03:27.176 Suite: components_suite 00:03:27.176 Test: vtophys_malloc_test ...passed 00:03:27.176 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:27.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.177 EAL: Restoring previous memory policy: 4 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was expanded by 4MB 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was shrunk by 4MB 00:03:27.177 EAL: Trying to obtain current memory policy. 00:03:27.177 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.177 EAL: Restoring previous memory policy: 4 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was expanded by 6MB 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was shrunk by 6MB 00:03:27.177 EAL: Trying to obtain current memory policy. 00:03:27.177 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.177 EAL: Restoring previous memory policy: 4 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was expanded by 10MB 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was shrunk by 10MB 00:03:27.177 EAL: Trying to obtain current memory policy. 
00:03:27.177 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.177 EAL: Restoring previous memory policy: 4 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was expanded by 18MB 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was shrunk by 18MB 00:03:27.177 EAL: Trying to obtain current memory policy. 00:03:27.177 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.177 EAL: Restoring previous memory policy: 4 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was expanded by 34MB 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was shrunk by 34MB 00:03:27.177 EAL: Trying to obtain current memory policy. 00:03:27.177 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.177 EAL: Restoring previous memory policy: 4 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was expanded by 66MB 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was shrunk by 66MB 00:03:27.177 EAL: Trying to obtain current memory policy. 00:03:27.177 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.177 EAL: Restoring previous memory policy: 4 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was expanded by 130MB 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was shrunk by 130MB 00:03:27.177 EAL: Trying to obtain current memory policy. 00:03:27.177 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.177 EAL: Restoring previous memory policy: 4 00:03:27.177 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.177 EAL: request: mp_malloc_sync 00:03:27.177 EAL: No shared files mode enabled, IPC is disabled 00:03:27.177 EAL: Heap on socket 0 was expanded by 258MB 00:03:27.437 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.437 EAL: request: mp_malloc_sync 00:03:27.437 EAL: No shared files mode enabled, IPC is disabled 00:03:27.437 EAL: Heap on socket 0 was shrunk by 258MB 00:03:27.437 EAL: Trying to obtain current memory policy. 
00:03:27.437 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.437 EAL: Restoring previous memory policy: 4 00:03:27.437 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.437 EAL: request: mp_malloc_sync 00:03:27.437 EAL: No shared files mode enabled, IPC is disabled 00:03:27.437 EAL: Heap on socket 0 was expanded by 514MB 00:03:27.437 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.696 EAL: request: mp_malloc_sync 00:03:27.696 EAL: No shared files mode enabled, IPC is disabled 00:03:27.696 EAL: Heap on socket 0 was shrunk by 514MB 00:03:27.696 EAL: Trying to obtain current memory policy. 00:03:27.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.696 EAL: Restoring previous memory policy: 4 00:03:27.696 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.696 EAL: request: mp_malloc_sync 00:03:27.696 EAL: No shared files mode enabled, IPC is disabled 00:03:27.696 EAL: Heap on socket 0 was expanded by 1026MB 00:03:27.956 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.215 EAL: request: mp_malloc_sync 00:03:28.215 EAL: No shared files mode enabled, IPC is disabled 00:03:28.215 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:28.215 passed 00:03:28.215 00:03:28.215 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.215 suites 1 1 n/a 0 0 00:03:28.215 tests 2 2 2 0 0 00:03:28.215 asserts 497 497 497 0 n/a 00:03:28.215 00:03:28.215 Elapsed time = 0.970 seconds 00:03:28.215 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.215 EAL: request: mp_malloc_sync 00:03:28.215 EAL: No shared files mode enabled, IPC is disabled 00:03:28.215 EAL: Heap on socket 0 was shrunk by 2MB 00:03:28.215 EAL: No shared files mode enabled, IPC is disabled 00:03:28.215 EAL: No shared files mode enabled, IPC is disabled 00:03:28.215 EAL: No shared files mode enabled, IPC is disabled 00:03:28.215 00:03:28.215 real 0m1.093s 00:03:28.215 user 0m0.648s 00:03:28.215 sys 0m0.419s 00:03:28.215 14:13:17 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.215 14:13:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:28.215 ************************************ 00:03:28.215 END TEST env_vtophys 00:03:28.215 ************************************ 00:03:28.215 14:13:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:28.215 14:13:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.215 14:13:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.215 14:13:17 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.215 ************************************ 00:03:28.215 START TEST env_pci 00:03:28.215 ************************************ 00:03:28.215 14:13:17 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:28.215 00:03:28.215 00:03:28.215 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.215 http://cunit.sourceforge.net/ 00:03:28.215 00:03:28.215 00:03:28.215 Suite: pci 00:03:28.215 Test: pci_hook ...[2024-11-17 14:13:17.310886] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1265059 has claimed it 00:03:28.215 EAL: Cannot find device (10000:00:01.0) 00:03:28.215 EAL: Failed to attach device on primary process 00:03:28.215 passed 00:03:28.215 00:03:28.215 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:28.215 suites 1 1 n/a 0 0 00:03:28.215 tests 1 1 1 0 0 00:03:28.215 asserts 25 25 25 0 n/a 00:03:28.215 00:03:28.215 Elapsed time = 0.025 seconds 00:03:28.215 00:03:28.215 real 0m0.044s 00:03:28.215 user 0m0.014s 00:03:28.215 sys 0m0.030s 00:03:28.215 14:13:17 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.215 14:13:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:28.215 ************************************ 00:03:28.215 END TEST env_pci 00:03:28.215 ************************************ 00:03:28.215 14:13:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:28.215 14:13:17 env -- env/env.sh@15 -- # uname 00:03:28.215 14:13:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:28.215 14:13:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:28.215 14:13:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:28.215 14:13:17 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:28.215 14:13:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.215 14:13:17 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.215 ************************************ 00:03:28.215 START TEST env_dpdk_post_init 00:03:28.215 ************************************ 00:03:28.215 14:13:17 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:28.474 EAL: Detected CPU lcores: 96 00:03:28.474 EAL: Detected NUMA nodes: 2 00:03:28.475 EAL: Detected shared linkage of DPDK 00:03:28.475 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:28.475 EAL: Selected IOVA mode 'VA' 00:03:28.475 EAL: VFIO support initialized 00:03:28.475 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:28.475 EAL: Using IOMMU type 1 (Type 1) 00:03:28.475 EAL: Ignore mapping IO port bar(1) 00:03:28.475 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:28.475 EAL: Ignore mapping IO port bar(1) 00:03:28.475 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:28.475 EAL: Ignore mapping IO port bar(1) 00:03:28.475 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:28.475 EAL: Ignore mapping IO port bar(1) 00:03:28.475 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:28.475 EAL: Ignore mapping IO port bar(1) 00:03:28.475 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:28.475 EAL: Ignore mapping IO port bar(1) 00:03:28.475 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:28.475 EAL: Ignore mapping IO port bar(1) 00:03:28.475 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:28.475 EAL: Ignore mapping IO port bar(1) 00:03:28.475 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:29.412 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:29.412 EAL: Ignore mapping IO port bar(1) 00:03:29.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:29.412 EAL: Ignore mapping IO port bar(1) 00:03:29.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:29.412 EAL: Ignore mapping IO port bar(1) 00:03:29.412 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:29.412 EAL: Ignore mapping IO port bar(1) 00:03:29.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:29.412 EAL: Ignore mapping IO port bar(1) 00:03:29.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:29.412 EAL: Ignore mapping IO port bar(1) 00:03:29.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:29.412 EAL: Ignore mapping IO port bar(1) 00:03:29.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:29.412 EAL: Ignore mapping IO port bar(1) 00:03:29.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:32.701 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:32.701 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:32.701 Starting DPDK initialization... 00:03:32.701 Starting SPDK post initialization... 00:03:32.701 SPDK NVMe probe 00:03:32.701 Attaching to 0000:5e:00.0 00:03:32.701 Attached to 0000:5e:00.0 00:03:32.701 Cleaning up... 00:03:32.701 00:03:32.701 real 0m4.346s 00:03:32.701 user 0m2.959s 00:03:32.701 sys 0m0.457s 00:03:32.701 14:13:21 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.701 14:13:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:32.701 ************************************ 00:03:32.701 END TEST env_dpdk_post_init 00:03:32.701 ************************************ 00:03:32.701 14:13:21 env -- env/env.sh@26 -- # uname 00:03:32.701 14:13:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:32.701 14:13:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:32.701 14:13:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.701 14:13:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.701 14:13:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:32.701 ************************************ 00:03:32.701 START TEST env_mem_callbacks 00:03:32.701 ************************************ 00:03:32.701 14:13:21 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:32.701 EAL: Detected CPU lcores: 96 00:03:32.701 EAL: Detected NUMA nodes: 2 00:03:32.701 EAL: Detected shared linkage of DPDK 00:03:32.701 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:32.701 EAL: Selected IOVA mode 'VA' 00:03:32.701 EAL: VFIO support initialized 00:03:32.701 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:32.701 00:03:32.701 00:03:32.701 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.701 http://cunit.sourceforge.net/ 00:03:32.701 00:03:32.701 00:03:32.701 Suite: memory 00:03:32.701 Test: test ... 
00:03:32.701 register 0x200000200000 2097152 00:03:32.701 malloc 3145728 00:03:32.701 register 0x200000400000 4194304 00:03:32.701 buf 0x200000500000 len 3145728 PASSED 00:03:32.701 malloc 64 00:03:32.701 buf 0x2000004fff40 len 64 PASSED 00:03:32.701 malloc 4194304 00:03:32.701 register 0x200000800000 6291456 00:03:32.701 buf 0x200000a00000 len 4194304 PASSED 00:03:32.701 free 0x200000500000 3145728 00:03:32.701 free 0x2000004fff40 64 00:03:32.701 unregister 0x200000400000 4194304 PASSED 00:03:32.701 free 0x200000a00000 4194304 00:03:32.701 unregister 0x200000800000 6291456 PASSED 00:03:32.701 malloc 8388608 00:03:32.701 register 0x200000400000 10485760 00:03:32.701 buf 0x200000600000 len 8388608 PASSED 00:03:32.701 free 0x200000600000 8388608 00:03:32.701 unregister 0x200000400000 10485760 PASSED 00:03:32.701 passed 00:03:32.701 00:03:32.701 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.701 suites 1 1 n/a 0 0 00:03:32.701 tests 1 1 1 0 0 00:03:32.701 asserts 15 15 15 0 n/a 00:03:32.701 00:03:32.701 Elapsed time = 0.008 seconds 00:03:32.701 00:03:32.701 real 0m0.053s 00:03:32.701 user 0m0.019s 00:03:32.701 sys 0m0.034s 00:03:32.701 14:13:21 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.701 14:13:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:32.701 ************************************ 00:03:32.701 END TEST env_mem_callbacks 00:03:32.701 ************************************ 00:03:32.961 00:03:32.961 real 0m6.223s 00:03:32.961 user 0m4.042s 00:03:32.961 sys 0m1.261s 00:03:32.961 14:13:21 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.961 14:13:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:32.961 ************************************ 00:03:32.961 END TEST env 00:03:32.961 ************************************ 00:03:32.961 14:13:21 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:32.961 14:13:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.961 14:13:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.961 14:13:21 -- common/autotest_common.sh@10 -- # set +x 00:03:32.961 ************************************ 00:03:32.961 START TEST rpc 00:03:32.961 ************************************ 00:03:32.961 14:13:21 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:32.961 * Looking for test storage... 
00:03:32.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:32.961 14:13:22 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:32.961 14:13:22 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:32.961 14:13:22 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:32.961 14:13:22 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.961 14:13:22 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:32.961 14:13:22 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:32.961 14:13:22 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:32.961 14:13:22 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:32.961 14:13:22 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:32.961 14:13:22 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:32.961 14:13:22 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:32.961 14:13:22 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:32.961 14:13:22 rpc -- scripts/common.sh@345 -- # : 1 00:03:32.961 14:13:22 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:32.961 14:13:22 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:32.961 14:13:22 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:32.961 14:13:22 rpc -- scripts/common.sh@353 -- # local d=1 00:03:32.961 14:13:22 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.961 14:13:22 rpc -- scripts/common.sh@355 -- # echo 1 00:03:32.961 14:13:22 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:32.961 14:13:22 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:32.961 14:13:22 rpc -- scripts/common.sh@353 -- # local d=2 00:03:32.961 14:13:22 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.961 14:13:22 rpc -- scripts/common.sh@355 -- # echo 2 00:03:32.961 14:13:22 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:32.961 14:13:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:32.961 14:13:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:32.961 14:13:22 rpc -- scripts/common.sh@368 -- # return 0 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:32.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.961 --rc genhtml_branch_coverage=1 00:03:32.961 --rc genhtml_function_coverage=1 00:03:32.961 --rc genhtml_legend=1 00:03:32.961 --rc geninfo_all_blocks=1 00:03:32.961 --rc geninfo_unexecuted_blocks=1 00:03:32.961 00:03:32.961 ' 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:32.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.961 --rc genhtml_branch_coverage=1 00:03:32.961 --rc genhtml_function_coverage=1 00:03:32.961 --rc genhtml_legend=1 00:03:32.961 --rc geninfo_all_blocks=1 00:03:32.961 --rc geninfo_unexecuted_blocks=1 00:03:32.961 00:03:32.961 ' 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:32.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.961 --rc genhtml_branch_coverage=1 00:03:32.961 --rc genhtml_function_coverage=1 
00:03:32.961 --rc genhtml_legend=1 00:03:32.961 --rc geninfo_all_blocks=1 00:03:32.961 --rc geninfo_unexecuted_blocks=1 00:03:32.961 00:03:32.961 ' 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:32.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.961 --rc genhtml_branch_coverage=1 00:03:32.961 --rc genhtml_function_coverage=1 00:03:32.961 --rc genhtml_legend=1 00:03:32.961 --rc geninfo_all_blocks=1 00:03:32.961 --rc geninfo_unexecuted_blocks=1 00:03:32.961 00:03:32.961 ' 00:03:32.961 14:13:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1265948 00:03:32.961 14:13:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:32.961 14:13:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:32.961 14:13:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1265948 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@835 -- # '[' -z 1265948 ']' 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:32.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:32.961 14:13:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.221 [2024-11-17 14:13:22.221344] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:03:33.221 [2024-11-17 14:13:22.221404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265948 ] 00:03:33.221 [2024-11-17 14:13:22.296875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.221 [2024-11-17 14:13:22.338682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:33.221 [2024-11-17 14:13:22.338717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1265948' to capture a snapshot of events at runtime. 00:03:33.221 [2024-11-17 14:13:22.338725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:33.221 [2024-11-17 14:13:22.338733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:33.221 [2024-11-17 14:13:22.338738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1265948 for offline analysis/debug. 
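Every rpc_cmd in the tests that follow is a JSON-RPC 2.0 request sent over the Unix-domain socket spdk_tgt listens on (/var/tmp/spdk.sock by default). A bare-bones sketch of that exchange, mimicking what rpc_cmd and scripts/rpc.py do under the hood; this is not SPDK source:

/* Sketch only: one JSON-RPC request to spdk_tgt's default socket.
 * bdev_get_bdevs is the same method the integrity tests below call. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_un sa = { .sun_family = AF_UNIX };
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);

	strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);
	if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
		perror("connect");
		return 1;
	}
	const char *req =
	    "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_get_bdevs\"}";
	if (write(fd, req, strlen(req)) < 0)
		perror("write");

	char buf[8192];
	ssize_t n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		printf("%s\n", buf); /* JSON like the bdev arrays below */
	}
	close(fd);
	return 0;
}

On a fresh target the reply carries an empty bdev list, which is what the jq length checks below assert before Malloc0 and Passthru0 are created.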
00:03:33.221 [2024-11-17 14:13:22.339271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.480 14:13:22 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:33.480 14:13:22 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:33.480 14:13:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.480 14:13:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.480 14:13:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:33.480 14:13:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:33.480 14:13:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.480 14:13:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.480 14:13:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.480 ************************************ 00:03:33.480 START TEST rpc_integrity 00:03:33.480 ************************************ 00:03:33.480 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:33.480 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:33.480 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.480 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.480 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.480 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:33.480 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:33.480 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:33.480 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:33.481 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.481 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.481 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.481 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:33.481 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:33.481 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.481 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.481 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.481 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:33.481 { 00:03:33.481 "name": "Malloc0", 00:03:33.481 "aliases": [ 00:03:33.481 "de940003-5609-4be4-865a-91d005d65787" 00:03:33.481 ], 00:03:33.481 "product_name": "Malloc disk", 00:03:33.481 "block_size": 512, 00:03:33.481 "num_blocks": 16384, 00:03:33.481 "uuid": "de940003-5609-4be4-865a-91d005d65787", 00:03:33.481 "assigned_rate_limits": { 00:03:33.481 "rw_ios_per_sec": 0, 00:03:33.481 "rw_mbytes_per_sec": 0, 00:03:33.481 "r_mbytes_per_sec": 0, 00:03:33.481 "w_mbytes_per_sec": 0 00:03:33.481 }, 
00:03:33.481 "claimed": false, 00:03:33.481 "zoned": false, 00:03:33.481 "supported_io_types": { 00:03:33.481 "read": true, 00:03:33.481 "write": true, 00:03:33.481 "unmap": true, 00:03:33.481 "flush": true, 00:03:33.481 "reset": true, 00:03:33.481 "nvme_admin": false, 00:03:33.481 "nvme_io": false, 00:03:33.481 "nvme_io_md": false, 00:03:33.481 "write_zeroes": true, 00:03:33.481 "zcopy": true, 00:03:33.481 "get_zone_info": false, 00:03:33.481 "zone_management": false, 00:03:33.481 "zone_append": false, 00:03:33.481 "compare": false, 00:03:33.481 "compare_and_write": false, 00:03:33.481 "abort": true, 00:03:33.481 "seek_hole": false, 00:03:33.481 "seek_data": false, 00:03:33.481 "copy": true, 00:03:33.481 "nvme_iov_md": false 00:03:33.481 }, 00:03:33.481 "memory_domains": [ 00:03:33.481 { 00:03:33.481 "dma_device_id": "system", 00:03:33.481 "dma_device_type": 1 00:03:33.481 }, 00:03:33.481 { 00:03:33.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:33.481 "dma_device_type": 2 00:03:33.481 } 00:03:33.481 ], 00:03:33.481 "driver_specific": {} 00:03:33.481 } 00:03:33.481 ]' 00:03:33.481 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.741 [2024-11-17 14:13:22.723696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:33.741 [2024-11-17 14:13:22.723726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:33.741 [2024-11-17 14:13:22.723739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6c96d0 00:03:33.741 [2024-11-17 14:13:22.723745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:33.741 [2024-11-17 14:13:22.724868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:33.741 [2024-11-17 14:13:22.724890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:33.741 Passthru0 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:33.741 { 00:03:33.741 "name": "Malloc0", 00:03:33.741 "aliases": [ 00:03:33.741 "de940003-5609-4be4-865a-91d005d65787" 00:03:33.741 ], 00:03:33.741 "product_name": "Malloc disk", 00:03:33.741 "block_size": 512, 00:03:33.741 "num_blocks": 16384, 00:03:33.741 "uuid": "de940003-5609-4be4-865a-91d005d65787", 00:03:33.741 "assigned_rate_limits": { 00:03:33.741 "rw_ios_per_sec": 0, 00:03:33.741 "rw_mbytes_per_sec": 0, 00:03:33.741 "r_mbytes_per_sec": 0, 00:03:33.741 "w_mbytes_per_sec": 0 00:03:33.741 }, 00:03:33.741 "claimed": true, 00:03:33.741 "claim_type": "exclusive_write", 00:03:33.741 "zoned": false, 00:03:33.741 "supported_io_types": { 00:03:33.741 "read": true, 00:03:33.741 "write": true, 00:03:33.741 "unmap": true, 00:03:33.741 "flush": 
true, 00:03:33.741 "reset": true, 00:03:33.741 "nvme_admin": false, 00:03:33.741 "nvme_io": false, 00:03:33.741 "nvme_io_md": false, 00:03:33.741 "write_zeroes": true, 00:03:33.741 "zcopy": true, 00:03:33.741 "get_zone_info": false, 00:03:33.741 "zone_management": false, 00:03:33.741 "zone_append": false, 00:03:33.741 "compare": false, 00:03:33.741 "compare_and_write": false, 00:03:33.741 "abort": true, 00:03:33.741 "seek_hole": false, 00:03:33.741 "seek_data": false, 00:03:33.741 "copy": true, 00:03:33.741 "nvme_iov_md": false 00:03:33.741 }, 00:03:33.741 "memory_domains": [ 00:03:33.741 { 00:03:33.741 "dma_device_id": "system", 00:03:33.741 "dma_device_type": 1 00:03:33.741 }, 00:03:33.741 { 00:03:33.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:33.741 "dma_device_type": 2 00:03:33.741 } 00:03:33.741 ], 00:03:33.741 "driver_specific": {} 00:03:33.741 }, 00:03:33.741 { 00:03:33.741 "name": "Passthru0", 00:03:33.741 "aliases": [ 00:03:33.741 "9a7ee6a5-051e-5c91-ae7c-c5b933c7704a" 00:03:33.741 ], 00:03:33.741 "product_name": "passthru", 00:03:33.741 "block_size": 512, 00:03:33.741 "num_blocks": 16384, 00:03:33.741 "uuid": "9a7ee6a5-051e-5c91-ae7c-c5b933c7704a", 00:03:33.741 "assigned_rate_limits": { 00:03:33.741 "rw_ios_per_sec": 0, 00:03:33.741 "rw_mbytes_per_sec": 0, 00:03:33.741 "r_mbytes_per_sec": 0, 00:03:33.741 "w_mbytes_per_sec": 0 00:03:33.741 }, 00:03:33.741 "claimed": false, 00:03:33.741 "zoned": false, 00:03:33.741 "supported_io_types": { 00:03:33.741 "read": true, 00:03:33.741 "write": true, 00:03:33.741 "unmap": true, 00:03:33.741 "flush": true, 00:03:33.741 "reset": true, 00:03:33.741 "nvme_admin": false, 00:03:33.741 "nvme_io": false, 00:03:33.741 "nvme_io_md": false, 00:03:33.741 "write_zeroes": true, 00:03:33.741 "zcopy": true, 00:03:33.741 "get_zone_info": false, 00:03:33.741 "zone_management": false, 00:03:33.741 "zone_append": false, 00:03:33.741 "compare": false, 00:03:33.741 "compare_and_write": false, 00:03:33.741 "abort": true, 00:03:33.741 "seek_hole": false, 00:03:33.741 "seek_data": false, 00:03:33.741 "copy": true, 00:03:33.741 "nvme_iov_md": false 00:03:33.741 }, 00:03:33.741 "memory_domains": [ 00:03:33.741 { 00:03:33.741 "dma_device_id": "system", 00:03:33.741 "dma_device_type": 1 00:03:33.741 }, 00:03:33.741 { 00:03:33.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:33.741 "dma_device_type": 2 00:03:33.741 } 00:03:33.741 ], 00:03:33.741 "driver_specific": { 00:03:33.741 "passthru": { 00:03:33.741 "name": "Passthru0", 00:03:33.741 "base_bdev_name": "Malloc0" 00:03:33.741 } 00:03:33.741 } 00:03:33.741 } 00:03:33.741 ]' 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:33.741 14:13:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:33.741 00:03:33.741 real 0m0.279s 00:03:33.741 user 0m0.168s 00:03:33.741 sys 0m0.045s 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.741 14:13:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.741 ************************************ 00:03:33.741 END TEST rpc_integrity 00:03:33.741 ************************************ 00:03:33.741 14:13:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:33.741 14:13:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.741 14:13:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.741 14:13:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.741 ************************************ 00:03:33.741 START TEST rpc_plugins 00:03:33.741 ************************************ 00:03:33.741 14:13:22 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:33.741 14:13:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:33.741 14:13:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.741 14:13:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:33.741 14:13:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:33.741 14:13:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:33.741 14:13:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:33.741 14:13:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:33.741 14:13:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.008 14:13:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.008 14:13:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:34.008 { 00:03:34.008 "name": "Malloc1", 00:03:34.008 "aliases": [ 00:03:34.008 "4f47cc54-f66d-4495-844d-660477335ae6" 00:03:34.008 ], 00:03:34.008 "product_name": "Malloc disk", 00:03:34.008 "block_size": 4096, 00:03:34.008 "num_blocks": 256, 00:03:34.008 "uuid": "4f47cc54-f66d-4495-844d-660477335ae6", 00:03:34.008 "assigned_rate_limits": { 00:03:34.008 "rw_ios_per_sec": 0, 00:03:34.008 "rw_mbytes_per_sec": 0, 00:03:34.008 "r_mbytes_per_sec": 0, 00:03:34.008 "w_mbytes_per_sec": 0 00:03:34.008 }, 00:03:34.008 "claimed": false, 00:03:34.008 "zoned": false, 00:03:34.008 "supported_io_types": { 00:03:34.008 "read": true, 00:03:34.008 "write": true, 00:03:34.008 "unmap": true, 00:03:34.008 "flush": true, 00:03:34.008 "reset": true, 00:03:34.008 "nvme_admin": false, 00:03:34.008 "nvme_io": false, 00:03:34.008 "nvme_io_md": false, 00:03:34.008 "write_zeroes": true, 00:03:34.008 "zcopy": true, 00:03:34.008 "get_zone_info": false, 00:03:34.008 "zone_management": false, 00:03:34.008 "zone_append": false, 00:03:34.008 "compare": false, 00:03:34.008 "compare_and_write": false, 00:03:34.008 "abort": true, 00:03:34.008 "seek_hole": false, 00:03:34.008 "seek_data": false, 00:03:34.008 "copy": true, 00:03:34.008 "nvme_iov_md": false 
00:03:34.008 }, 00:03:34.008 "memory_domains": [ 00:03:34.008 { 00:03:34.008 "dma_device_id": "system", 00:03:34.008 "dma_device_type": 1 00:03:34.008 }, 00:03:34.008 { 00:03:34.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.008 "dma_device_type": 2 00:03:34.008 } 00:03:34.008 ], 00:03:34.008 "driver_specific": {} 00:03:34.008 } 00:03:34.008 ]' 00:03:34.008 14:13:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:34.008 14:13:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:34.008 14:13:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:34.008 14:13:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.008 14:13:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.008 14:13:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.008 14:13:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:34.008 14:13:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.008 14:13:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.008 14:13:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.008 14:13:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:34.008 14:13:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:34.008 14:13:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:34.008 00:03:34.008 real 0m0.139s 00:03:34.008 user 0m0.085s 00:03:34.008 sys 0m0.019s 00:03:34.008 14:13:23 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.008 14:13:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.008 ************************************ 00:03:34.008 END TEST rpc_plugins 00:03:34.008 ************************************ 00:03:34.008 14:13:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:34.008 14:13:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.008 14:13:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.008 14:13:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.008 ************************************ 00:03:34.008 START TEST rpc_trace_cmd_test 00:03:34.008 ************************************ 00:03:34.008 14:13:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:34.008 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:34.008 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:34.008 14:13:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.008 14:13:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:34.008 14:13:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.008 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:34.008 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1265948", 00:03:34.008 "tpoint_group_mask": "0x8", 00:03:34.008 "iscsi_conn": { 00:03:34.008 "mask": "0x2", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "scsi": { 00:03:34.008 "mask": "0x4", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "bdev": { 00:03:34.008 "mask": "0x8", 00:03:34.008 "tpoint_mask": "0xffffffffffffffff" 00:03:34.008 }, 00:03:34.008 "nvmf_rdma": { 00:03:34.008 "mask": "0x10", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "nvmf_tcp": { 00:03:34.008 "mask": "0x20", 00:03:34.008 
"tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "ftl": { 00:03:34.008 "mask": "0x40", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "blobfs": { 00:03:34.008 "mask": "0x80", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "dsa": { 00:03:34.008 "mask": "0x200", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "thread": { 00:03:34.008 "mask": "0x400", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "nvme_pcie": { 00:03:34.008 "mask": "0x800", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "iaa": { 00:03:34.008 "mask": "0x1000", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "nvme_tcp": { 00:03:34.008 "mask": "0x2000", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "bdev_nvme": { 00:03:34.008 "mask": "0x4000", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "sock": { 00:03:34.008 "mask": "0x8000", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "blob": { 00:03:34.008 "mask": "0x10000", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "bdev_raid": { 00:03:34.008 "mask": "0x20000", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 }, 00:03:34.008 "scheduler": { 00:03:34.008 "mask": "0x40000", 00:03:34.008 "tpoint_mask": "0x0" 00:03:34.008 } 00:03:34.008 }' 00:03:34.008 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:34.008 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:34.008 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:34.267 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:34.267 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:34.267 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:34.267 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:34.267 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:34.267 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:34.267 14:13:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:34.267 00:03:34.267 real 0m0.227s 00:03:34.267 user 0m0.189s 00:03:34.267 sys 0m0.028s 00:03:34.267 14:13:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.267 14:13:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:34.267 ************************************ 00:03:34.267 END TEST rpc_trace_cmd_test 00:03:34.267 ************************************ 00:03:34.267 14:13:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:34.267 14:13:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:34.267 14:13:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:34.267 14:13:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.267 14:13:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.267 14:13:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.267 ************************************ 00:03:34.267 START TEST rpc_daemon_integrity 00:03:34.267 ************************************ 00:03:34.267 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:34.267 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:34.267 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.267 14:13:23 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.267 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.267 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:34.267 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:34.526 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:34.526 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:34.526 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.526 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.526 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.526 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:34.526 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:34.526 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.526 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.526 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.526 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:34.526 { 00:03:34.526 "name": "Malloc2", 00:03:34.526 "aliases": [ 00:03:34.526 "46cb6f9c-0663-43c2-8596-e270d51b0b06" 00:03:34.526 ], 00:03:34.526 "product_name": "Malloc disk", 00:03:34.526 "block_size": 512, 00:03:34.526 "num_blocks": 16384, 00:03:34.526 "uuid": "46cb6f9c-0663-43c2-8596-e270d51b0b06", 00:03:34.526 "assigned_rate_limits": { 00:03:34.526 "rw_ios_per_sec": 0, 00:03:34.526 "rw_mbytes_per_sec": 0, 00:03:34.526 "r_mbytes_per_sec": 0, 00:03:34.526 "w_mbytes_per_sec": 0 00:03:34.526 }, 00:03:34.526 "claimed": false, 00:03:34.526 "zoned": false, 00:03:34.526 "supported_io_types": { 00:03:34.526 "read": true, 00:03:34.526 "write": true, 00:03:34.527 "unmap": true, 00:03:34.527 "flush": true, 00:03:34.527 "reset": true, 00:03:34.527 "nvme_admin": false, 00:03:34.527 "nvme_io": false, 00:03:34.527 "nvme_io_md": false, 00:03:34.527 "write_zeroes": true, 00:03:34.527 "zcopy": true, 00:03:34.527 "get_zone_info": false, 00:03:34.527 "zone_management": false, 00:03:34.527 "zone_append": false, 00:03:34.527 "compare": false, 00:03:34.527 "compare_and_write": false, 00:03:34.527 "abort": true, 00:03:34.527 "seek_hole": false, 00:03:34.527 "seek_data": false, 00:03:34.527 "copy": true, 00:03:34.527 "nvme_iov_md": false 00:03:34.527 }, 00:03:34.527 "memory_domains": [ 00:03:34.527 { 00:03:34.527 "dma_device_id": "system", 00:03:34.527 "dma_device_type": 1 00:03:34.527 }, 00:03:34.527 { 00:03:34.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.527 "dma_device_type": 2 00:03:34.527 } 00:03:34.527 ], 00:03:34.527 "driver_specific": {} 00:03:34.527 } 00:03:34.527 ]' 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.527 [2024-11-17 14:13:23.574038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:34.527 
[2024-11-17 14:13:23.574065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:34.527 [2024-11-17 14:13:23.574077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x759e60 00:03:34.527 [2024-11-17 14:13:23.574084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:34.527 [2024-11-17 14:13:23.575200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:34.527 [2024-11-17 14:13:23.575223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:34.527 Passthru0 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:34.527 { 00:03:34.527 "name": "Malloc2", 00:03:34.527 "aliases": [ 00:03:34.527 "46cb6f9c-0663-43c2-8596-e270d51b0b06" 00:03:34.527 ], 00:03:34.527 "product_name": "Malloc disk", 00:03:34.527 "block_size": 512, 00:03:34.527 "num_blocks": 16384, 00:03:34.527 "uuid": "46cb6f9c-0663-43c2-8596-e270d51b0b06", 00:03:34.527 "assigned_rate_limits": { 00:03:34.527 "rw_ios_per_sec": 0, 00:03:34.527 "rw_mbytes_per_sec": 0, 00:03:34.527 "r_mbytes_per_sec": 0, 00:03:34.527 "w_mbytes_per_sec": 0 00:03:34.527 }, 00:03:34.527 "claimed": true, 00:03:34.527 "claim_type": "exclusive_write", 00:03:34.527 "zoned": false, 00:03:34.527 "supported_io_types": { 00:03:34.527 "read": true, 00:03:34.527 "write": true, 00:03:34.527 "unmap": true, 00:03:34.527 "flush": true, 00:03:34.527 "reset": true, 00:03:34.527 "nvme_admin": false, 00:03:34.527 "nvme_io": false, 00:03:34.527 "nvme_io_md": false, 00:03:34.527 "write_zeroes": true, 00:03:34.527 "zcopy": true, 00:03:34.527 "get_zone_info": false, 00:03:34.527 "zone_management": false, 00:03:34.527 "zone_append": false, 00:03:34.527 "compare": false, 00:03:34.527 "compare_and_write": false, 00:03:34.527 "abort": true, 00:03:34.527 "seek_hole": false, 00:03:34.527 "seek_data": false, 00:03:34.527 "copy": true, 00:03:34.527 "nvme_iov_md": false 00:03:34.527 }, 00:03:34.527 "memory_domains": [ 00:03:34.527 { 00:03:34.527 "dma_device_id": "system", 00:03:34.527 "dma_device_type": 1 00:03:34.527 }, 00:03:34.527 { 00:03:34.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.527 "dma_device_type": 2 00:03:34.527 } 00:03:34.527 ], 00:03:34.527 "driver_specific": {} 00:03:34.527 }, 00:03:34.527 { 00:03:34.527 "name": "Passthru0", 00:03:34.527 "aliases": [ 00:03:34.527 "8324ddf3-e6d7-5b07-b6b2-d5a28b167787" 00:03:34.527 ], 00:03:34.527 "product_name": "passthru", 00:03:34.527 "block_size": 512, 00:03:34.527 "num_blocks": 16384, 00:03:34.527 "uuid": "8324ddf3-e6d7-5b07-b6b2-d5a28b167787", 00:03:34.527 "assigned_rate_limits": { 00:03:34.527 "rw_ios_per_sec": 0, 00:03:34.527 "rw_mbytes_per_sec": 0, 00:03:34.527 "r_mbytes_per_sec": 0, 00:03:34.527 "w_mbytes_per_sec": 0 00:03:34.527 }, 00:03:34.527 "claimed": false, 00:03:34.527 "zoned": false, 00:03:34.527 "supported_io_types": { 00:03:34.527 "read": true, 00:03:34.527 "write": true, 00:03:34.527 "unmap": true, 00:03:34.527 "flush": true, 00:03:34.527 "reset": true, 
00:03:34.527 "nvme_admin": false, 00:03:34.527 "nvme_io": false, 00:03:34.527 "nvme_io_md": false, 00:03:34.527 "write_zeroes": true, 00:03:34.527 "zcopy": true, 00:03:34.527 "get_zone_info": false, 00:03:34.527 "zone_management": false, 00:03:34.527 "zone_append": false, 00:03:34.527 "compare": false, 00:03:34.527 "compare_and_write": false, 00:03:34.527 "abort": true, 00:03:34.527 "seek_hole": false, 00:03:34.527 "seek_data": false, 00:03:34.527 "copy": true, 00:03:34.527 "nvme_iov_md": false 00:03:34.527 }, 00:03:34.527 "memory_domains": [ 00:03:34.527 { 00:03:34.527 "dma_device_id": "system", 00:03:34.527 "dma_device_type": 1 00:03:34.527 }, 00:03:34.527 { 00:03:34.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.527 "dma_device_type": 2 00:03:34.527 } 00:03:34.527 ], 00:03:34.527 "driver_specific": { 00:03:34.527 "passthru": { 00:03:34.527 "name": "Passthru0", 00:03:34.527 "base_bdev_name": "Malloc2" 00:03:34.527 } 00:03:34.527 } 00:03:34.527 } 00:03:34.527 ]' 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:34.527 00:03:34.527 real 0m0.278s 00:03:34.527 user 0m0.184s 00:03:34.527 sys 0m0.033s 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.527 14:13:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.527 ************************************ 00:03:34.527 END TEST rpc_daemon_integrity 00:03:34.527 ************************************ 00:03:34.787 14:13:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:34.787 14:13:23 rpc -- rpc/rpc.sh@84 -- # killprocess 1265948 00:03:34.787 14:13:23 rpc -- common/autotest_common.sh@954 -- # '[' -z 1265948 ']' 00:03:34.787 14:13:23 rpc -- common/autotest_common.sh@958 -- # kill -0 1265948 00:03:34.787 14:13:23 rpc -- common/autotest_common.sh@959 -- # uname 00:03:34.787 14:13:23 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:34.787 14:13:23 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1265948 
00:03:34.787 14:13:23 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:34.787 14:13:23 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:34.787 14:13:23 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1265948' 00:03:34.787 killing process with pid 1265948 00:03:34.787 14:13:23 rpc -- common/autotest_common.sh@973 -- # kill 1265948 00:03:34.787 14:13:23 rpc -- common/autotest_common.sh@978 -- # wait 1265948 00:03:35.050 00:03:35.050 real 0m2.117s 00:03:35.050 user 0m2.713s 00:03:35.050 sys 0m0.698s 00:03:35.051 14:13:24 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.051 14:13:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.051 ************************************ 00:03:35.051 END TEST rpc 00:03:35.051 ************************************ 00:03:35.051 14:13:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.051 14:13:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.051 14:13:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.051 14:13:24 -- common/autotest_common.sh@10 -- # set +x 00:03:35.051 ************************************ 00:03:35.051 START TEST skip_rpc 00:03:35.051 ************************************ 00:03:35.051 14:13:24 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.051 * Looking for test storage... 00:03:35.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.312 14:13:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:35.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.312 --rc genhtml_branch_coverage=1 00:03:35.312 --rc genhtml_function_coverage=1 00:03:35.312 --rc genhtml_legend=1 00:03:35.312 --rc geninfo_all_blocks=1 00:03:35.312 --rc geninfo_unexecuted_blocks=1 00:03:35.312 00:03:35.312 ' 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:35.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.312 --rc genhtml_branch_coverage=1 00:03:35.312 --rc genhtml_function_coverage=1 00:03:35.312 --rc genhtml_legend=1 00:03:35.312 --rc geninfo_all_blocks=1 00:03:35.312 --rc geninfo_unexecuted_blocks=1 00:03:35.312 00:03:35.312 ' 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:35.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.312 --rc genhtml_branch_coverage=1 00:03:35.312 --rc genhtml_function_coverage=1 00:03:35.312 --rc genhtml_legend=1 00:03:35.312 --rc geninfo_all_blocks=1 00:03:35.312 --rc geninfo_unexecuted_blocks=1 00:03:35.312 00:03:35.312 ' 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:35.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.312 --rc genhtml_branch_coverage=1 00:03:35.312 --rc genhtml_function_coverage=1 00:03:35.312 --rc genhtml_legend=1 00:03:35.312 --rc geninfo_all_blocks=1 00:03:35.312 --rc geninfo_unexecuted_blocks=1 00:03:35.312 00:03:35.312 ' 00:03:35.312 14:13:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.312 14:13:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:35.312 14:13:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.312 14:13:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.312 ************************************ 00:03:35.312 START TEST skip_rpc 00:03:35.312 ************************************ 00:03:35.312 14:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:35.312 
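test_skip_rpc below starts spdk_tgt with --no-rpc-server, so no listener ever appears on /var/tmp/spdk.sock and the subsequent rpc_cmd spdk_get_version must fail; the NOT wrapper and the es=1 trace assert exactly that inverted expectation. A sketch of the pattern; expect_failure and the rpc.py invocation are illustrative, not the test's code:

/* Sketch only: succeed when the wrapped command fails, the same
 * inversion the NOT/es=1 machinery below performs. */
#include <stdio.h>
#include <stdlib.h>

static int
expect_failure(const char *cmd)
{
	int rc = system(cmd);
	/* A nonzero status is the desired outcome here: with
	 * --no-rpc-server there is nothing to answer the RPC. */
	return rc != 0 ? 0 : 1;
}

int main(void)
{
	return expect_failure("scripts/rpc.py spdk_get_version");
}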
14:13:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1266589 00:03:35.312 14:13:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:35.312 14:13:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:35.312 14:13:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:35.312 [2024-11-17 14:13:24.444841] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:03:35.312 [2024-11-17 14:13:24.444881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266589 ] 00:03:35.312 [2024-11-17 14:13:24.521284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.572 [2024-11-17 14:13:24.562810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1266589 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1266589 ']' 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1266589 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1266589 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:40.846 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:40.847 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1266589' 00:03:40.847 killing process with pid 1266589 00:03:40.847 14:13:29 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1266589 00:03:40.847 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1266589 00:03:40.847 00:03:40.847 real 0m5.365s 00:03:40.847 user 0m5.117s 00:03:40.847 sys 0m0.282s 00:03:40.847 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.847 14:13:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.847 ************************************ 00:03:40.847 END TEST skip_rpc 00:03:40.847 ************************************ 00:03:40.847 14:13:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:40.847 14:13:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.847 14:13:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.847 14:13:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.847 ************************************ 00:03:40.847 START TEST skip_rpc_with_json 00:03:40.847 ************************************ 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1267533 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1267533 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1267533 ']' 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:40.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:40.847 14:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:40.847 [2024-11-17 14:13:29.876111] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
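Note: the target launched in the trace above is awaited with waitforlisten on /var/tmp/spdk.sock before any RPC is issued. A minimal stand-in for that polling pattern, assuming the in-tree scripts/rpc.py; the helper name and retry counts here are illustrative simplifications of the real autotest_common.sh implementation:

  # Hypothetical waitforlisten-style helper: poll the RPC socket until the
  # target answers, instead of sleeping blindly.
  wait_for_rpc_socket() {
      local sock=${1:-/var/tmp/spdk.sock} retries=100
      while ((retries-- > 0)); do
          # -t 1 caps each probe at one second; success means the app is up
          ./scripts/rpc.py -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1
  }

By contrast, the plain skip_rpc pass that just ended has no socket to wait on (--no-rpc-server), which is why its trace shows a fixed "sleep 5" instead.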
00:03:40.847 [2024-11-17 14:13:29.876153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267533 ] 00:03:40.847 [2024-11-17 14:13:29.952648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.847 [2024-11-17 14:13:29.995742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.105 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:41.105 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:41.105 14:13:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:41.105 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.105 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.105 [2024-11-17 14:13:30.211846] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:41.105 request: 00:03:41.105 { 00:03:41.105 "trtype": "tcp", 00:03:41.105 "method": "nvmf_get_transports", 00:03:41.106 "req_id": 1 00:03:41.106 } 00:03:41.106 Got JSON-RPC error response 00:03:41.106 response: 00:03:41.106 { 00:03:41.106 "code": -19, 00:03:41.106 "message": "No such device" 00:03:41.106 } 00:03:41.106 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:41.106 14:13:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:41.106 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.106 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.106 [2024-11-17 14:13:30.223960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:41.106 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.106 14:13:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:41.106 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.106 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.365 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.365 14:13:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:41.365 { 00:03:41.365 "subsystems": [ 00:03:41.365 { 00:03:41.365 "subsystem": "fsdev", 00:03:41.365 "config": [ 00:03:41.365 { 00:03:41.365 "method": "fsdev_set_opts", 00:03:41.365 "params": { 00:03:41.365 "fsdev_io_pool_size": 65535, 00:03:41.365 "fsdev_io_cache_size": 256 00:03:41.365 } 00:03:41.365 } 00:03:41.365 ] 00:03:41.365 }, 00:03:41.365 { 00:03:41.365 "subsystem": "vfio_user_target", 00:03:41.365 "config": null 00:03:41.365 }, 00:03:41.365 { 00:03:41.365 "subsystem": "keyring", 00:03:41.365 "config": [] 00:03:41.365 }, 00:03:41.365 { 00:03:41.365 "subsystem": "iobuf", 00:03:41.365 "config": [ 00:03:41.365 { 00:03:41.365 "method": "iobuf_set_options", 00:03:41.365 "params": { 00:03:41.365 "small_pool_count": 8192, 00:03:41.365 "large_pool_count": 1024, 00:03:41.365 "small_bufsize": 8192, 00:03:41.365 "large_bufsize": 135168, 00:03:41.365 "enable_numa": false 00:03:41.365 } 00:03:41.365 } 
00:03:41.365 ] 00:03:41.365 }, 00:03:41.365 { 00:03:41.365 "subsystem": "sock", 00:03:41.365 "config": [ 00:03:41.365 { 00:03:41.365 "method": "sock_set_default_impl", 00:03:41.365 "params": { 00:03:41.365 "impl_name": "posix" 00:03:41.365 } 00:03:41.365 }, 00:03:41.366 { 00:03:41.366 "method": "sock_impl_set_options", 00:03:41.366 "params": { 00:03:41.366 "impl_name": "ssl", 00:03:41.366 "recv_buf_size": 4096, 00:03:41.366 "send_buf_size": 4096, 00:03:41.366 "enable_recv_pipe": true, 00:03:41.366 "enable_quickack": false, 00:03:41.366 "enable_placement_id": 0, 00:03:41.366 "enable_zerocopy_send_server": true, 00:03:41.366 "enable_zerocopy_send_client": false, 00:03:41.366 "zerocopy_threshold": 0, 00:03:41.366 "tls_version": 0, 00:03:41.366 "enable_ktls": false 00:03:41.366 } 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "method": "sock_impl_set_options", 00:03:41.366 "params": { 00:03:41.366 "impl_name": "posix", 00:03:41.366 "recv_buf_size": 2097152, 00:03:41.366 "send_buf_size": 2097152, 00:03:41.366 "enable_recv_pipe": true, 00:03:41.366 "enable_quickack": false, 00:03:41.366 "enable_placement_id": 0, 00:03:41.366 "enable_zerocopy_send_server": true, 00:03:41.366 "enable_zerocopy_send_client": false, 00:03:41.366 "zerocopy_threshold": 0, 00:03:41.366 "tls_version": 0, 00:03:41.366 "enable_ktls": false 00:03:41.366 } 00:03:41.366 } 00:03:41.366 ] 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "subsystem": "vmd", 00:03:41.366 "config": [] 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "subsystem": "accel", 00:03:41.366 "config": [ 00:03:41.366 { 00:03:41.366 "method": "accel_set_options", 00:03:41.366 "params": { 00:03:41.366 "small_cache_size": 128, 00:03:41.366 "large_cache_size": 16, 00:03:41.366 "task_count": 2048, 00:03:41.366 "sequence_count": 2048, 00:03:41.366 "buf_count": 2048 00:03:41.366 } 00:03:41.366 } 00:03:41.366 ] 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "subsystem": "bdev", 00:03:41.366 "config": [ 00:03:41.366 { 00:03:41.366 "method": "bdev_set_options", 00:03:41.366 "params": { 00:03:41.366 "bdev_io_pool_size": 65535, 00:03:41.366 "bdev_io_cache_size": 256, 00:03:41.366 "bdev_auto_examine": true, 00:03:41.366 "iobuf_small_cache_size": 128, 00:03:41.366 "iobuf_large_cache_size": 16 00:03:41.366 } 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "method": "bdev_raid_set_options", 00:03:41.366 "params": { 00:03:41.366 "process_window_size_kb": 1024, 00:03:41.366 "process_max_bandwidth_mb_sec": 0 00:03:41.366 } 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "method": "bdev_iscsi_set_options", 00:03:41.366 "params": { 00:03:41.366 "timeout_sec": 30 00:03:41.366 } 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "method": "bdev_nvme_set_options", 00:03:41.366 "params": { 00:03:41.366 "action_on_timeout": "none", 00:03:41.366 "timeout_us": 0, 00:03:41.366 "timeout_admin_us": 0, 00:03:41.366 "keep_alive_timeout_ms": 10000, 00:03:41.366 "arbitration_burst": 0, 00:03:41.366 "low_priority_weight": 0, 00:03:41.366 "medium_priority_weight": 0, 00:03:41.366 "high_priority_weight": 0, 00:03:41.366 "nvme_adminq_poll_period_us": 10000, 00:03:41.366 "nvme_ioq_poll_period_us": 0, 00:03:41.366 "io_queue_requests": 0, 00:03:41.366 "delay_cmd_submit": true, 00:03:41.366 "transport_retry_count": 4, 00:03:41.366 "bdev_retry_count": 3, 00:03:41.366 "transport_ack_timeout": 0, 00:03:41.366 "ctrlr_loss_timeout_sec": 0, 00:03:41.366 "reconnect_delay_sec": 0, 00:03:41.366 "fast_io_fail_timeout_sec": 0, 00:03:41.366 "disable_auto_failback": false, 00:03:41.366 "generate_uuids": false, 00:03:41.366 "transport_tos": 
0, 00:03:41.366 "nvme_error_stat": false, 00:03:41.366 "rdma_srq_size": 0, 00:03:41.366 "io_path_stat": false, 00:03:41.366 "allow_accel_sequence": false, 00:03:41.366 "rdma_max_cq_size": 0, 00:03:41.366 "rdma_cm_event_timeout_ms": 0, 00:03:41.366 "dhchap_digests": [ 00:03:41.366 "sha256", 00:03:41.366 "sha384", 00:03:41.366 "sha512" 00:03:41.366 ], 00:03:41.366 "dhchap_dhgroups": [ 00:03:41.366 "null", 00:03:41.366 "ffdhe2048", 00:03:41.366 "ffdhe3072", 00:03:41.366 "ffdhe4096", 00:03:41.366 "ffdhe6144", 00:03:41.366 "ffdhe8192" 00:03:41.366 ] 00:03:41.366 } 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "method": "bdev_nvme_set_hotplug", 00:03:41.366 "params": { 00:03:41.366 "period_us": 100000, 00:03:41.366 "enable": false 00:03:41.366 } 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "method": "bdev_wait_for_examine" 00:03:41.366 } 00:03:41.366 ] 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "subsystem": "scsi", 00:03:41.366 "config": null 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "subsystem": "scheduler", 00:03:41.366 "config": [ 00:03:41.366 { 00:03:41.366 "method": "framework_set_scheduler", 00:03:41.366 "params": { 00:03:41.366 "name": "static" 00:03:41.366 } 00:03:41.366 } 00:03:41.366 ] 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "subsystem": "vhost_scsi", 00:03:41.366 "config": [] 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "subsystem": "vhost_blk", 00:03:41.366 "config": [] 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "subsystem": "ublk", 00:03:41.366 "config": [] 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "subsystem": "nbd", 00:03:41.366 "config": [] 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "subsystem": "nvmf", 00:03:41.366 "config": [ 00:03:41.366 { 00:03:41.366 "method": "nvmf_set_config", 00:03:41.366 "params": { 00:03:41.366 "discovery_filter": "match_any", 00:03:41.366 "admin_cmd_passthru": { 00:03:41.366 "identify_ctrlr": false 00:03:41.366 }, 00:03:41.366 "dhchap_digests": [ 00:03:41.366 "sha256", 00:03:41.366 "sha384", 00:03:41.366 "sha512" 00:03:41.366 ], 00:03:41.366 "dhchap_dhgroups": [ 00:03:41.366 "null", 00:03:41.366 "ffdhe2048", 00:03:41.366 "ffdhe3072", 00:03:41.366 "ffdhe4096", 00:03:41.366 "ffdhe6144", 00:03:41.366 "ffdhe8192" 00:03:41.366 ] 00:03:41.366 } 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "method": "nvmf_set_max_subsystems", 00:03:41.366 "params": { 00:03:41.366 "max_subsystems": 1024 00:03:41.366 } 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "method": "nvmf_set_crdt", 00:03:41.366 "params": { 00:03:41.366 "crdt1": 0, 00:03:41.366 "crdt2": 0, 00:03:41.366 "crdt3": 0 00:03:41.366 } 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "method": "nvmf_create_transport", 00:03:41.366 "params": { 00:03:41.366 "trtype": "TCP", 00:03:41.366 "max_queue_depth": 128, 00:03:41.366 "max_io_qpairs_per_ctrlr": 127, 00:03:41.366 "in_capsule_data_size": 4096, 00:03:41.366 "max_io_size": 131072, 00:03:41.366 "io_unit_size": 131072, 00:03:41.366 "max_aq_depth": 128, 00:03:41.366 "num_shared_buffers": 511, 00:03:41.366 "buf_cache_size": 4294967295, 00:03:41.366 "dif_insert_or_strip": false, 00:03:41.366 "zcopy": false, 00:03:41.366 "c2h_success": true, 00:03:41.366 "sock_priority": 0, 00:03:41.366 "abort_timeout_sec": 1, 00:03:41.366 "ack_timeout": 0, 00:03:41.366 "data_wr_pool_size": 0 00:03:41.366 } 00:03:41.366 } 00:03:41.366 ] 00:03:41.366 }, 00:03:41.366 { 00:03:41.366 "subsystem": "iscsi", 00:03:41.366 "config": [ 00:03:41.366 { 00:03:41.366 "method": "iscsi_set_options", 00:03:41.366 "params": { 00:03:41.366 "node_base": "iqn.2016-06.io.spdk", 00:03:41.366 "max_sessions": 
128, 00:03:41.366 "max_connections_per_session": 2, 00:03:41.366 "max_queue_depth": 64, 00:03:41.366 "default_time2wait": 2, 00:03:41.366 "default_time2retain": 20, 00:03:41.366 "first_burst_length": 8192, 00:03:41.366 "immediate_data": true, 00:03:41.366 "allow_duplicated_isid": false, 00:03:41.366 "error_recovery_level": 0, 00:03:41.366 "nop_timeout": 60, 00:03:41.366 "nop_in_interval": 30, 00:03:41.366 "disable_chap": false, 00:03:41.366 "require_chap": false, 00:03:41.366 "mutual_chap": false, 00:03:41.366 "chap_group": 0, 00:03:41.366 "max_large_datain_per_connection": 64, 00:03:41.366 "max_r2t_per_connection": 4, 00:03:41.366 "pdu_pool_size": 36864, 00:03:41.366 "immediate_data_pool_size": 16384, 00:03:41.366 "data_out_pool_size": 2048 00:03:41.366 } 00:03:41.366 } 00:03:41.366 ] 00:03:41.366 } 00:03:41.366 ] 00:03:41.366 } 00:03:41.366 14:13:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:41.366 14:13:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1267533 00:03:41.366 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1267533 ']' 00:03:41.366 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1267533 00:03:41.366 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:41.366 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:41.366 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1267533 00:03:41.366 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:41.366 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:41.367 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1267533' 00:03:41.367 killing process with pid 1267533 00:03:41.367 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1267533 00:03:41.367 14:13:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1267533 00:03:41.626 14:13:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1267558 00:03:41.626 14:13:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:41.626 14:13:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:46.899 14:13:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1267558 00:03:46.899 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1267558 ']' 00:03:46.899 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1267558 00:03:46.899 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:46.899 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:46.899 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1267558 00:03:46.899 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:46.899 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:46.899 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1267558' 00:03:46.899 killing process with pid 1267558 00:03:46.899 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1267558 00:03:46.899 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1267558 00:03:46.899 14:13:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:46.899 14:13:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:46.899 00:03:46.899 real 0m6.288s 00:03:46.899 user 0m5.991s 00:03:46.899 sys 0m0.592s 00:03:46.899 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.899 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:46.899 ************************************ 00:03:46.899 END TEST skip_rpc_with_json 00:03:46.899 ************************************ 00:03:47.159 14:13:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:47.159 14:13:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.159 14:13:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.159 14:13:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.159 ************************************ 00:03:47.159 START TEST skip_rpc_with_delay 00:03:47.159 ************************************ 00:03:47.159 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:47.159 14:13:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.159 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:47.159 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.159 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.159 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.159 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.159 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.159 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.159 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.160 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.160 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:47.160 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.160 
[2024-11-17 14:13:36.239457] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:47.160 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:47.160 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:47.160 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:47.160 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:47.160 00:03:47.160 real 0m0.070s 00:03:47.160 user 0m0.038s 00:03:47.160 sys 0m0.031s 00:03:47.160 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.160 14:13:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:47.160 ************************************ 00:03:47.160 END TEST skip_rpc_with_delay 00:03:47.160 ************************************ 00:03:47.160 14:13:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:47.160 14:13:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:47.160 14:13:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:47.160 14:13:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.160 14:13:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.160 14:13:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.160 ************************************ 00:03:47.160 START TEST exit_on_failed_rpc_init 00:03:47.160 ************************************ 00:03:47.160 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:47.160 14:13:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1268541 00:03:47.160 14:13:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1268541 00:03:47.160 14:13:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:47.160 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1268541 ']' 00:03:47.160 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.160 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:47.160 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.160 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:47.160 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:47.160 [2024-11-17 14:13:36.379681] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
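Note: the skip_rpc_with_delay pass that just completed hinges on spdk_tgt rejecting '--wait-for-rpc' when no RPC server will start, and on the NOT wrapper inverting the exit status. A condensed sketch of that inversion, simplified from the valid_exec_arg/es bookkeeping traced above (the real helper also screens out deaths by signal, the "(( es > 128 ))" check in the trace):

  # Hypothetical condensed NOT: succeed only when the wrapped command fails.
  NOT() { ! "$@"; }
  # The delay test expects this flag combination to be rejected:
  NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc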
00:03:47.160 [2024-11-17 14:13:36.379727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268541 ] 00:03:47.419 [2024-11-17 14:13:36.454418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.419 [2024-11-17 14:13:36.498319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.677 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:47.678 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:47.678 [2024-11-17 14:13:36.765311] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:03:47.678 [2024-11-17 14:13:36.765359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268755 ] 00:03:47.678 [2024-11-17 14:13:36.837971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.678 [2024-11-17 14:13:36.878658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:47.678 [2024-11-17 14:13:36.878711] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:47.678 [2024-11-17 14:13:36.878720] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:47.678 [2024-11-17 14:13:36.878729] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1268541 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1268541 ']' 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1268541 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1268541 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1268541' 00:03:47.936 killing process with pid 1268541 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1268541 00:03:47.936 14:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1268541 00:03:48.195 00:03:48.196 real 0m0.946s 00:03:48.196 user 0m1.013s 00:03:48.196 sys 0m0.373s 00:03:48.196 14:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.196 14:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.196 ************************************ 00:03:48.196 END TEST exit_on_failed_rpc_init 00:03:48.196 ************************************ 00:03:48.196 14:13:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:48.196 00:03:48.196 real 0m13.126s 00:03:48.196 user 0m12.368s 00:03:48.196 sys 0m1.558s 00:03:48.196 14:13:37 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.196 14:13:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.196 ************************************ 00:03:48.196 END TEST skip_rpc 00:03:48.196 ************************************ 00:03:48.196 14:13:37 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:48.196 14:13:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.196 14:13:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.196 14:13:37 -- 
common/autotest_common.sh@10 -- # set +x 00:03:48.196 ************************************ 00:03:48.196 START TEST rpc_client 00:03:48.196 ************************************ 00:03:48.196 14:13:37 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:48.455 * Looking for test storage... 00:03:48.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:48.455 14:13:37 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:48.455 14:13:37 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:48.455 14:13:37 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:48.455 14:13:37 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.455 14:13:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:48.455 14:13:37 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.455 14:13:37 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:48.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.455 --rc genhtml_branch_coverage=1 00:03:48.455 --rc genhtml_function_coverage=1 00:03:48.455 --rc genhtml_legend=1 00:03:48.455 --rc geninfo_all_blocks=1 00:03:48.455 --rc geninfo_unexecuted_blocks=1 00:03:48.455 00:03:48.455 ' 00:03:48.455 14:13:37 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:48.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.455 --rc genhtml_branch_coverage=1 00:03:48.455 --rc genhtml_function_coverage=1 00:03:48.455 --rc genhtml_legend=1 00:03:48.455 --rc geninfo_all_blocks=1 00:03:48.455 --rc geninfo_unexecuted_blocks=1 00:03:48.455 00:03:48.455 ' 00:03:48.455 14:13:37 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:48.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.455 --rc genhtml_branch_coverage=1 00:03:48.455 --rc genhtml_function_coverage=1 00:03:48.455 --rc genhtml_legend=1 00:03:48.455 --rc geninfo_all_blocks=1 00:03:48.455 --rc geninfo_unexecuted_blocks=1 00:03:48.455 00:03:48.455 ' 00:03:48.455 14:13:37 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:48.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.455 --rc genhtml_branch_coverage=1 00:03:48.455 --rc genhtml_function_coverage=1 00:03:48.455 --rc genhtml_legend=1 00:03:48.455 --rc geninfo_all_blocks=1 00:03:48.455 --rc geninfo_unexecuted_blocks=1 00:03:48.455 00:03:48.455 ' 00:03:48.455 14:13:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:48.455 OK 00:03:48.455 14:13:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:48.455 00:03:48.455 real 0m0.201s 00:03:48.455 user 0m0.120s 00:03:48.455 sys 0m0.095s 00:03:48.455 14:13:37 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.455 14:13:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:48.455 ************************************ 00:03:48.455 END TEST rpc_client 00:03:48.455 ************************************ 00:03:48.455 14:13:37 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
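Note: the lcov version gate traced in the test prologue above (and repeated below for json_config) is an ordinary dotted-version compare: split both strings on '.', '-' and ':', then compare field by field, treating missing fields as zero. A compact stand-in, simplified from cmp_versions in scripts/common.sh; the function name is hypothetical and non-numeric fields are not handled:

  # Returns 0 when version $1 sorts strictly before version $2.
  version_lt() {
      local a b i
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
          ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2: enable branch/function rc opts"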
00:03:48.455 14:13:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.455 14:13:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.455 14:13:37 -- common/autotest_common.sh@10 -- # set +x 00:03:48.455 ************************************ 00:03:48.455 START TEST json_config 00:03:48.455 ************************************ 00:03:48.455 14:13:37 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:48.715 14:13:37 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:48.715 14:13:37 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:48.715 14:13:37 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:48.715 14:13:37 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:48.715 14:13:37 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.715 14:13:37 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.715 14:13:37 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.715 14:13:37 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.715 14:13:37 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.715 14:13:37 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.715 14:13:37 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.715 14:13:37 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.715 14:13:37 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.716 14:13:37 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.716 14:13:37 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.716 14:13:37 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:48.716 14:13:37 json_config -- scripts/common.sh@345 -- # : 1 00:03:48.716 14:13:37 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.716 14:13:37 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.716 14:13:37 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:48.716 14:13:37 json_config -- scripts/common.sh@353 -- # local d=1 00:03:48.716 14:13:37 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.716 14:13:37 json_config -- scripts/common.sh@355 -- # echo 1 00:03:48.716 14:13:37 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.716 14:13:37 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:48.716 14:13:37 json_config -- scripts/common.sh@353 -- # local d=2 00:03:48.716 14:13:37 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.716 14:13:37 json_config -- scripts/common.sh@355 -- # echo 2 00:03:48.716 14:13:37 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.716 14:13:37 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.716 14:13:37 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.716 14:13:37 json_config -- scripts/common.sh@368 -- # return 0 00:03:48.716 14:13:37 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.716 14:13:37 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:48.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.716 --rc genhtml_branch_coverage=1 00:03:48.716 --rc genhtml_function_coverage=1 00:03:48.716 --rc genhtml_legend=1 00:03:48.716 --rc geninfo_all_blocks=1 00:03:48.716 --rc geninfo_unexecuted_blocks=1 00:03:48.716 00:03:48.716 ' 00:03:48.716 14:13:37 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:48.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.716 --rc genhtml_branch_coverage=1 00:03:48.716 --rc genhtml_function_coverage=1 00:03:48.716 --rc genhtml_legend=1 00:03:48.716 --rc geninfo_all_blocks=1 00:03:48.716 --rc geninfo_unexecuted_blocks=1 00:03:48.716 00:03:48.716 ' 00:03:48.716 14:13:37 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:48.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.716 --rc genhtml_branch_coverage=1 00:03:48.716 --rc genhtml_function_coverage=1 00:03:48.716 --rc genhtml_legend=1 00:03:48.716 --rc geninfo_all_blocks=1 00:03:48.716 --rc geninfo_unexecuted_blocks=1 00:03:48.716 00:03:48.716 ' 00:03:48.716 14:13:37 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:48.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.716 --rc genhtml_branch_coverage=1 00:03:48.716 --rc genhtml_function_coverage=1 00:03:48.716 --rc genhtml_legend=1 00:03:48.716 --rc geninfo_all_blocks=1 00:03:48.716 --rc geninfo_unexecuted_blocks=1 00:03:48.716 00:03:48.716 ' 00:03:48.716 14:13:37 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:48.716 14:13:37 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:48.716 14:13:37 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:48.716 14:13:37 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.716 14:13:37 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.716 14:13:37 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.716 14:13:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.716 14:13:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.716 14:13:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.716 14:13:37 json_config -- paths/export.sh@5 -- # export PATH 00:03:48.716 14:13:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@51 -- # : 0 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:03:48.716 14:13:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:48.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:48.716 14:13:37 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:48.716 14:13:37 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:48.716 14:13:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:48.716 14:13:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:48.716 14:13:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:48.717 INFO: JSON configuration test init 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:48.717 14:13:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:48.717 14:13:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:48.717 14:13:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:48.717 14:13:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.717 14:13:37 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:48.717 14:13:37 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:48.717 14:13:37 json_config -- json_config/common.sh@10 -- # shift 00:03:48.717 14:13:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:48.717 14:13:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:48.717 14:13:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:48.717 14:13:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:48.717 14:13:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:48.717 14:13:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1268988 00:03:48.717 14:13:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:48.717 Waiting for target to run... 00:03:48.717 14:13:37 json_config -- json_config/common.sh@25 -- # waitforlisten 1268988 /var/tmp/spdk_tgt.sock 00:03:48.717 14:13:37 json_config -- common/autotest_common.sh@835 -- # '[' -z 1268988 ']' 00:03:48.717 14:13:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:48.717 14:13:37 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:48.717 14:13:37 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:48.717 14:13:37 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:48.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:48.717 14:13:37 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:48.717 14:13:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.717 [2024-11-17 14:13:37.899697] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
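Note: json_config_test_start_app launches the target frozen before subsystem init: '-r' points the RPC server at /var/tmp/spdk_tgt.sock and '--wait-for-rpc' holds initialization until told. A minimal sketch of driving such a target, assuming framework_start_init is the call that unfreezes it; the harness's exact sequencing (load_config first, shown further below) lives in json_config.sh:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  app_pid=$!
  # ...poll the socket as with waitforlisten, feed configuration RPCs, then:
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init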
00:03:48.717 [2024-11-17 14:13:37.899748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268988 ] 00:03:49.285 [2024-11-17 14:13:38.354615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.285 [2024-11-17 14:13:38.412959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.543 14:13:38 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:49.543 14:13:38 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:49.543 14:13:38 json_config -- json_config/common.sh@26 -- # echo '' 00:03:49.543 00:03:49.543 14:13:38 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:49.543 14:13:38 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:49.543 14:13:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.543 14:13:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.543 14:13:38 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:49.543 14:13:38 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:49.543 14:13:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.543 14:13:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.806 14:13:38 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:49.806 14:13:38 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:49.806 14:13:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:53.288 14:13:41 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:53.288 14:13:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:53.288 14:13:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.288 14:13:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.288 14:13:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:53.288 14:13:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:53.288 14:13:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:53.289 14:13:41 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:53.289 14:13:41 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:53.289 14:13:41 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:53.289 14:13:41 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:53.289 14:13:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:53.289 14:13:42 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@54 -- # sort 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:53.289 14:13:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.289 14:13:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:53.289 14:13:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.289 14:13:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:53.289 14:13:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:53.289 MallocForNvmf0 00:03:53.289 14:13:42 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:53.289 14:13:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:53.289 MallocForNvmf1 00:03:53.547 14:13:42 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:53.547 14:13:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:53.547 [2024-11-17 14:13:42.683373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:53.547 14:13:42 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:53.547 14:13:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:53.806 14:13:42 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:53.806 14:13:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:54.064 14:13:43 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:54.064 14:13:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:54.064 14:13:43 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:54.064 14:13:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:54.323 [2024-11-17 14:13:43.421703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:54.323 14:13:43 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:54.323 14:13:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:54.323 14:13:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.323 14:13:43 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:54.324 14:13:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:54.324 14:13:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.324 14:13:43 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:54.324 14:13:43 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:54.324 14:13:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:54.582 MallocBdevForConfigChangeCheck 00:03:54.582 14:13:43 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:54.582 14:13:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:54.582 14:13:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.582 14:13:43 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:54.583 14:13:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:54.841 14:13:44 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:54.841 INFO: shutting down applications... 
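The setup phase traced above reduces to a short rpc.py sequence: two malloc bdevs, a TCP transport, one subsystem carrying both namespaces, and a listener. A minimal replay sketch, assuming a spdk_tgt is already listening on /var/tmp/spdk_tgt.sock and $SPDK points at the repo root ($SPDK is shorthand for readability, not part of the test):

    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Backing bdevs for the namespaces (total size in MB, then block size).
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

    # Transport, subsystem, namespaces, listener, exactly as in the trace.
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

    # save_config writes the live configuration as JSON to stdout.
    $RPC save_config > spdk_tgt_config.json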
00:03:54.841 14:13:44 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:54.841 14:13:44 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:54.841 14:13:44 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:54.841 14:13:44 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:56.741 Calling clear_iscsi_subsystem 00:03:56.741 Calling clear_nvmf_subsystem 00:03:56.741 Calling clear_nbd_subsystem 00:03:56.741 Calling clear_ublk_subsystem 00:03:56.741 Calling clear_vhost_blk_subsystem 00:03:56.741 Calling clear_vhost_scsi_subsystem 00:03:56.741 Calling clear_bdev_subsystem 00:03:56.741 14:13:45 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:56.741 14:13:45 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:56.741 14:13:45 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:56.741 14:13:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.742 14:13:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:56.742 14:13:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:57.000 14:13:46 json_config -- json_config/json_config.sh@352 -- # break 00:03:57.000 14:13:46 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:57.000 14:13:46 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:03:57.000 14:13:46 json_config -- json_config/common.sh@31 -- # local app=target 00:03:57.000 14:13:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:57.000 14:13:46 json_config -- json_config/common.sh@35 -- # [[ -n 1268988 ]] 00:03:57.000 14:13:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1268988 00:03:57.000 14:13:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:57.000 14:13:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:57.000 14:13:46 json_config -- json_config/common.sh@41 -- # kill -0 1268988 00:03:57.000 14:13:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:57.569 14:13:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:57.569 14:13:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:57.569 14:13:46 json_config -- json_config/common.sh@41 -- # kill -0 1268988 00:03:57.569 14:13:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:57.569 14:13:46 json_config -- json_config/common.sh@43 -- # break 00:03:57.570 14:13:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:57.570 14:13:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:57.570 SPDK target shutdown done 00:03:57.570 14:13:46 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:57.570 INFO: relaunching applications... 
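Shutdown, as traced through json_config/common.sh above, is a plain signal-and-poll loop: send SIGINT, then give the target up to 30 half-second checks to exit on its own. The same pattern, condensed, with $pid standing in for the target pid (1268988 in this run):

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 only tests whether the pid still exists; no signal is delivered.
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done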
00:03:57.570 14:13:46 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.570 14:13:46 json_config -- json_config/common.sh@9 -- # local app=target 00:03:57.570 14:13:46 json_config -- json_config/common.sh@10 -- # shift 00:03:57.570 14:13:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:57.570 14:13:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:57.570 14:13:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:57.570 14:13:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.570 14:13:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.570 14:13:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1270632 00:03:57.570 14:13:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:57.570 Waiting for target to run... 00:03:57.570 14:13:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.570 14:13:46 json_config -- json_config/common.sh@25 -- # waitforlisten 1270632 /var/tmp/spdk_tgt.sock 00:03:57.570 14:13:46 json_config -- common/autotest_common.sh@835 -- # '[' -z 1270632 ']' 00:03:57.570 14:13:46 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.570 14:13:46 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:57.570 14:13:46 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.570 14:13:46 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:57.570 14:13:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.570 [2024-11-17 14:13:46.582762] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:03:57.570 [2024-11-17 14:13:46.582821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270632 ] 00:03:57.828 [2024-11-17 14:13:47.043072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.087 [2024-11-17 14:13:47.095146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.376 [2024-11-17 14:13:50.125332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:01.376 [2024-11-17 14:13:50.157676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:01.635 14:13:50 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.635 14:13:50 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:01.635 14:13:50 json_config -- json_config/common.sh@26 -- # echo '' 00:04:01.635 00:04:01.635 14:13:50 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:01.635 14:13:50 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:01.635 INFO: Checking if target configuration is the same... 
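The relaunch does not re-issue the RPCs; it hands the JSON saved earlier straight to spdk_tgt via --json and blocks until the RPC socket answers. A sketch of that restart; the polling loop illustrates what waitforlisten accomplishes and is not its actual body:

    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/spdk_tgt_config.json" &
    pid=$!

    # Wait until the socket accepts RPCs; spdk_get_version is a cheap probe call.
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done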
00:04:01.635 14:13:50 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:01.635 14:13:50 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.635 14:13:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:01.635 + '[' 2 -ne 2 ']' 00:04:01.635 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:01.635 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:01.635 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:01.635 +++ basename /dev/fd/62 00:04:01.635 ++ mktemp /tmp/62.XXX 00:04:01.635 + tmp_file_1=/tmp/62.AOZ 00:04:01.635 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.635 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:01.893 + tmp_file_2=/tmp/spdk_tgt_config.json.av7 00:04:01.893 + ret=0 00:04:01.893 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.151 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.151 + diff -u /tmp/62.AOZ /tmp/spdk_tgt_config.json.av7 00:04:02.151 + echo 'INFO: JSON config files are the same' 00:04:02.151 INFO: JSON config files are the same 00:04:02.151 + rm /tmp/62.AOZ /tmp/spdk_tgt_config.json.av7 00:04:02.151 + exit 0 00:04:02.151 14:13:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:02.151 14:13:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:02.151 INFO: changing configuration and checking if this can be detected... 00:04:02.151 14:13:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:02.151 14:13:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:02.411 14:13:51 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.411 14:13:51 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:02.411 14:13:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.411 + '[' 2 -ne 2 ']' 00:04:02.411 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:02.411 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:02.411 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:02.411 +++ basename /dev/fd/62 00:04:02.411 ++ mktemp /tmp/62.XXX 00:04:02.411 + tmp_file_1=/tmp/62.Wqz 00:04:02.411 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.411 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:02.411 + tmp_file_2=/tmp/spdk_tgt_config.json.o22 00:04:02.411 + ret=0 00:04:02.411 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.670 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.670 + diff -u /tmp/62.Wqz /tmp/spdk_tgt_config.json.o22 00:04:02.670 + ret=1 00:04:02.670 + echo '=== Start of file: /tmp/62.Wqz ===' 00:04:02.670 + cat /tmp/62.Wqz 00:04:02.670 + echo '=== End of file: /tmp/62.Wqz ===' 00:04:02.670 + echo '' 00:04:02.670 + echo '=== Start of file: /tmp/spdk_tgt_config.json.o22 ===' 00:04:02.670 + cat /tmp/spdk_tgt_config.json.o22 00:04:02.670 + echo '=== End of file: /tmp/spdk_tgt_config.json.o22 ===' 00:04:02.670 + echo '' 00:04:02.670 + rm /tmp/62.Wqz /tmp/spdk_tgt_config.json.o22 00:04:02.670 + exit 1 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:02.670 INFO: configuration change detected. 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:02.670 14:13:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.670 14:13:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@324 -- # [[ -n 1270632 ]] 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:02.670 14:13:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.670 14:13:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:02.670 14:13:51 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:02.670 14:13:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.670 14:13:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.929 14:13:51 json_config -- json_config/json_config.sh@330 -- # killprocess 1270632 00:04:02.929 14:13:51 json_config -- common/autotest_common.sh@954 -- # '[' -z 1270632 ']' 00:04:02.929 14:13:51 json_config -- common/autotest_common.sh@958 -- # kill -0 1270632 00:04:02.929 14:13:51 json_config -- common/autotest_common.sh@959 -- # uname 00:04:02.929 14:13:51 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.929 14:13:51 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1270632 00:04:02.929 14:13:51 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.929 14:13:51 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.929 14:13:51 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1270632' 00:04:02.929 killing process with pid 1270632 00:04:02.929 14:13:51 json_config -- common/autotest_common.sh@973 -- # kill 1270632 00:04:02.929 14:13:51 json_config -- common/autotest_common.sh@978 -- # wait 1270632 00:04:04.309 14:13:53 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.309 14:13:53 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:04.309 14:13:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.309 14:13:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.309 14:13:53 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:04.309 14:13:53 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:04.309 INFO: Success 00:04:04.309 00:04:04.309 real 0m15.818s 00:04:04.309 user 0m16.239s 00:04:04.309 sys 0m2.775s 00:04:04.309 14:13:53 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.309 14:13:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.309 ************************************ 00:04:04.309 END TEST json_config 00:04:04.309 ************************************ 00:04:04.309 14:13:53 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:04.309 14:13:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.309 14:13:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.309 14:13:53 -- common/autotest_common.sh@10 -- # set +x 00:04:04.568 ************************************ 00:04:04.568 START TEST json_config_extra_key 00:04:04.568 ************************************ 00:04:04.568 14:13:53 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:04.568 14:13:53 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.568 14:13:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.568 14:13:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.568 14:13:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.568 14:13:53 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.568 14:13:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.569 14:13:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.569 14:13:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:04.569 14:13:53 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.569 14:13:53 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.569 --rc genhtml_branch_coverage=1 00:04:04.569 --rc genhtml_function_coverage=1 00:04:04.569 --rc genhtml_legend=1 00:04:04.569 --rc geninfo_all_blocks=1 00:04:04.569 --rc geninfo_unexecuted_blocks=1 00:04:04.569 00:04:04.569 ' 00:04:04.569 14:13:53 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.569 --rc genhtml_branch_coverage=1 00:04:04.569 --rc genhtml_function_coverage=1 00:04:04.569 --rc genhtml_legend=1 00:04:04.569 --rc geninfo_all_blocks=1 00:04:04.569 --rc geninfo_unexecuted_blocks=1 00:04:04.569 00:04:04.569 ' 00:04:04.569 14:13:53 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.569 --rc genhtml_branch_coverage=1 00:04:04.569 --rc genhtml_function_coverage=1 00:04:04.569 --rc genhtml_legend=1 00:04:04.569 --rc geninfo_all_blocks=1 00:04:04.569 --rc geninfo_unexecuted_blocks=1 00:04:04.569 00:04:04.569 ' 00:04:04.569 14:13:53 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.569 --rc genhtml_branch_coverage=1 00:04:04.569 --rc genhtml_function_coverage=1 00:04:04.569 --rc genhtml_legend=1 00:04:04.569 --rc geninfo_all_blocks=1 00:04:04.569 --rc geninfo_unexecuted_blocks=1 00:04:04.569 00:04:04.569 ' 00:04:04.569 14:13:53 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:04.569 14:13:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:04.569 14:13:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.569 14:13:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.569 14:13:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.569 14:13:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.569 14:13:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.569 14:13:53 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.569 14:13:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:04.569 14:13:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:04.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:04.569 14:13:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:04.569 INFO: launching applications... 
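Before launching, json_config/common.sh sets up per-app bookkeeping through the associative arrays declared above, all keyed by the app name ('target' here). Condensed, and with the launch line an assumed composition of those recorded values, it amounts to:

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']="$SPDK/test/json_config/extra_key.json")

    # Start the target with its recorded parameters and remember the pid under its key.
    $SPDK/build/bin/spdk_tgt ${app_params['target']} -r "${app_socket['target']}" \
        --json "${configs_path['target']}" &
    app_pid['target']=$!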
00:04:04.569 14:13:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:04.569 14:13:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:04.569 14:13:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:04.569 14:13:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.569 14:13:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.569 14:13:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.569 14:13:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.569 14:13:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.569 14:13:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1271909 00:04:04.569 14:13:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.569 Waiting for target to run... 00:04:04.569 14:13:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1271909 /var/tmp/spdk_tgt.sock 00:04:04.569 14:13:53 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1271909 ']' 00:04:04.569 14:13:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:04.569 14:13:53 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.569 14:13:53 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.569 14:13:53 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.569 14:13:53 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.569 14:13:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:04.569 [2024-11-17 14:13:53.771625] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:04.569 [2024-11-17 14:13:53.771672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271909 ] 00:04:05.137 [2024-11-17 14:13:54.056205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.137 [2024-11-17 14:13:54.090734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.396 14:13:54 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.396 14:13:54 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:05.396 14:13:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:05.396 00:04:05.396 14:13:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:05.396 INFO: shutting down applications... 
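The extra_key script also installs an ERR trap up front (the trap line is visible in the sourcing above), so any failing command reports where it died and tears the target down. A sketch with an assumed on_error_exit body; only the trap line itself is verbatim from the script:

    on_error_exit() {
        # Assumed handler: report the failing function/line, kill the target, fail the test.
        echo "error in ${1:-?} at line ${2:-?}" >&2
        [[ -n ${app_pid['target']} ]] && kill -SIGINT "${app_pid['target']}"
        exit 1
    }
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR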
00:04:05.396 14:13:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:05.396 14:13:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:05.396 14:13:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:05.396 14:13:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1271909 ]] 00:04:05.397 14:13:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1271909 00:04:05.397 14:13:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:05.397 14:13:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.397 14:13:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1271909 00:04:05.397 14:13:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:05.964 14:13:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:05.964 14:13:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.964 14:13:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1271909 00:04:05.964 14:13:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:05.964 14:13:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:05.964 14:13:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:05.964 14:13:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:05.964 SPDK target shutdown done 00:04:05.964 14:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:05.964 Success 00:04:05.964 00:04:05.964 real 0m1.582s 00:04:05.964 user 0m1.372s 00:04:05.964 sys 0m0.404s 00:04:05.964 14:13:55 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.964 14:13:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:05.964 ************************************ 00:04:05.964 END TEST json_config_extra_key 00:04:05.964 ************************************ 00:04:05.964 14:13:55 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:05.964 14:13:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.964 14:13:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.964 14:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:05.964 ************************************ 00:04:05.964 START TEST alias_rpc 00:04:05.964 ************************************ 00:04:05.964 14:13:55 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:06.223 * Looking for test storage... 
00:04:06.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:06.223 14:13:55 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.223 14:13:55 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.223 14:13:55 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.223 14:13:55 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.223 14:13:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:06.223 14:13:55 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.223 14:13:55 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.223 --rc genhtml_branch_coverage=1 00:04:06.223 --rc genhtml_function_coverage=1 00:04:06.223 --rc genhtml_legend=1 00:04:06.223 --rc geninfo_all_blocks=1 00:04:06.223 --rc geninfo_unexecuted_blocks=1 00:04:06.223 00:04:06.223 ' 00:04:06.223 14:13:55 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.223 --rc genhtml_branch_coverage=1 00:04:06.223 --rc genhtml_function_coverage=1 00:04:06.223 --rc genhtml_legend=1 00:04:06.223 --rc geninfo_all_blocks=1 00:04:06.223 --rc geninfo_unexecuted_blocks=1 00:04:06.223 00:04:06.223 ' 00:04:06.223 14:13:55 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.223 --rc genhtml_branch_coverage=1 00:04:06.223 --rc genhtml_function_coverage=1 00:04:06.223 --rc genhtml_legend=1 00:04:06.223 --rc geninfo_all_blocks=1 00:04:06.223 --rc geninfo_unexecuted_blocks=1 00:04:06.223 00:04:06.223 ' 00:04:06.223 14:13:55 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.224 --rc genhtml_branch_coverage=1 00:04:06.224 --rc genhtml_function_coverage=1 00:04:06.224 --rc genhtml_legend=1 00:04:06.224 --rc geninfo_all_blocks=1 00:04:06.224 --rc geninfo_unexecuted_blocks=1 00:04:06.224 00:04:06.224 ' 00:04:06.224 14:13:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:06.224 14:13:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1272208 00:04:06.224 14:13:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1272208 00:04:06.224 14:13:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.224 14:13:55 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1272208 ']' 00:04:06.224 14:13:55 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.224 14:13:55 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.224 14:13:55 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.224 14:13:55 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.224 14:13:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.224 [2024-11-17 14:13:55.413674] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
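Teardown for this test goes through killprocess, whose probe sequence shows in the trace that follows: confirm the pid is set and still alive, check the process name (reactor_0 for an SPDK app), then kill and reap. Condensed:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                  # still running?
        pname=$(ps --no-headers -o comm= "$pid")    # reactor_0 here
        # The sudo-wrapper branch seen in the trace is not taken in this run and is omitted.
        kill "$pid" && wait "$pid"
    }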
00:04:06.224 [2024-11-17 14:13:55.413721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272208 ] 00:04:06.482 [2024-11-17 14:13:55.488440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.482 [2024-11-17 14:13:55.528730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.741 14:13:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.741 14:13:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:06.741 14:13:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:07.001 14:13:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1272208 00:04:07.001 14:13:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1272208 ']' 00:04:07.001 14:13:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1272208 00:04:07.001 14:13:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:07.001 14:13:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.001 14:13:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1272208 00:04:07.001 14:13:56 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.001 14:13:56 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.001 14:13:56 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1272208' 00:04:07.001 killing process with pid 1272208 00:04:07.001 14:13:56 alias_rpc -- common/autotest_common.sh@973 -- # kill 1272208 00:04:07.001 14:13:56 alias_rpc -- common/autotest_common.sh@978 -- # wait 1272208 00:04:07.260 00:04:07.260 real 0m1.139s 00:04:07.260 user 0m1.173s 00:04:07.261 sys 0m0.411s 00:04:07.261 14:13:56 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.261 14:13:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.261 ************************************ 00:04:07.261 END TEST alias_rpc 00:04:07.261 ************************************ 00:04:07.261 14:13:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:07.261 14:13:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:07.261 14:13:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.261 14:13:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.261 14:13:56 -- common/autotest_common.sh@10 -- # set +x 00:04:07.261 ************************************ 00:04:07.261 START TEST spdkcli_tcp 00:04:07.261 ************************************ 00:04:07.261 14:13:56 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:07.261 * Looking for test storage... 
00:04:07.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.520 14:13:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:07.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.520 --rc genhtml_branch_coverage=1 00:04:07.520 --rc genhtml_function_coverage=1 00:04:07.520 --rc genhtml_legend=1 00:04:07.520 --rc geninfo_all_blocks=1 00:04:07.520 --rc geninfo_unexecuted_blocks=1 00:04:07.520 00:04:07.520 ' 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:07.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.520 --rc genhtml_branch_coverage=1 00:04:07.520 --rc genhtml_function_coverage=1 00:04:07.520 --rc genhtml_legend=1 00:04:07.520 --rc geninfo_all_blocks=1 00:04:07.520 --rc 
geninfo_unexecuted_blocks=1 00:04:07.520 00:04:07.520 ' 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:07.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.520 --rc genhtml_branch_coverage=1 00:04:07.520 --rc genhtml_function_coverage=1 00:04:07.520 --rc genhtml_legend=1 00:04:07.520 --rc geninfo_all_blocks=1 00:04:07.520 --rc geninfo_unexecuted_blocks=1 00:04:07.520 00:04:07.520 ' 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:07.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.520 --rc genhtml_branch_coverage=1 00:04:07.520 --rc genhtml_function_coverage=1 00:04:07.520 --rc genhtml_legend=1 00:04:07.520 --rc geninfo_all_blocks=1 00:04:07.520 --rc geninfo_unexecuted_blocks=1 00:04:07.520 00:04:07.520 ' 00:04:07.520 14:13:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:07.520 14:13:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:07.520 14:13:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:07.520 14:13:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:07.520 14:13:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:07.520 14:13:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:07.520 14:13:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:07.520 14:13:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:07.520 14:13:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1272495 00:04:07.520 14:13:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1272495 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1272495 ']' 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.520 14:13:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:07.520 [2024-11-17 14:13:56.618400] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
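What spdkcli_tcp actually exercises is the TCP bridge visible in the trace below: socat forwards 127.0.0.1:9998 to the target's UNIX-domain RPC socket, and rpc.py then speaks to the TCP side instead of the socket file. The same bridge, condensed, with flag meanings read from the invocation in this log:

    # Expose the UNIX-domain RPC socket on IP_ADDRESS=127.0.0.1, PORT=9998.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r bounds connection retries, -t is the per-call timeout in seconds.
    $SPDK/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"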
00:04:07.520 [2024-11-17 14:13:56.618446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272495 ] 00:04:07.521 [2024-11-17 14:13:56.692960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:07.521 [2024-11-17 14:13:56.736868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.521 [2024-11-17 14:13:56.736869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.779 14:13:56 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.779 14:13:56 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:07.779 14:13:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1272499 00:04:07.779 14:13:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:07.779 14:13:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:08.038 [ 00:04:08.038 "bdev_malloc_delete", 00:04:08.038 "bdev_malloc_create", 00:04:08.038 "bdev_null_resize", 00:04:08.038 "bdev_null_delete", 00:04:08.038 "bdev_null_create", 00:04:08.038 "bdev_nvme_cuse_unregister", 00:04:08.038 "bdev_nvme_cuse_register", 00:04:08.038 "bdev_opal_new_user", 00:04:08.038 "bdev_opal_set_lock_state", 00:04:08.038 "bdev_opal_delete", 00:04:08.038 "bdev_opal_get_info", 00:04:08.038 "bdev_opal_create", 00:04:08.038 "bdev_nvme_opal_revert", 00:04:08.038 "bdev_nvme_opal_init", 00:04:08.038 "bdev_nvme_send_cmd", 00:04:08.038 "bdev_nvme_set_keys", 00:04:08.038 "bdev_nvme_get_path_iostat", 00:04:08.038 "bdev_nvme_get_mdns_discovery_info", 00:04:08.038 "bdev_nvme_stop_mdns_discovery", 00:04:08.038 "bdev_nvme_start_mdns_discovery", 00:04:08.038 "bdev_nvme_set_multipath_policy", 00:04:08.038 "bdev_nvme_set_preferred_path", 00:04:08.038 "bdev_nvme_get_io_paths", 00:04:08.038 "bdev_nvme_remove_error_injection", 00:04:08.038 "bdev_nvme_add_error_injection", 00:04:08.038 "bdev_nvme_get_discovery_info", 00:04:08.038 "bdev_nvme_stop_discovery", 00:04:08.038 "bdev_nvme_start_discovery", 00:04:08.038 "bdev_nvme_get_controller_health_info", 00:04:08.038 "bdev_nvme_disable_controller", 00:04:08.038 "bdev_nvme_enable_controller", 00:04:08.038 "bdev_nvme_reset_controller", 00:04:08.038 "bdev_nvme_get_transport_statistics", 00:04:08.038 "bdev_nvme_apply_firmware", 00:04:08.038 "bdev_nvme_detach_controller", 00:04:08.038 "bdev_nvme_get_controllers", 00:04:08.038 "bdev_nvme_attach_controller", 00:04:08.038 "bdev_nvme_set_hotplug", 00:04:08.038 "bdev_nvme_set_options", 00:04:08.038 "bdev_passthru_delete", 00:04:08.038 "bdev_passthru_create", 00:04:08.038 "bdev_lvol_set_parent_bdev", 00:04:08.038 "bdev_lvol_set_parent", 00:04:08.038 "bdev_lvol_check_shallow_copy", 00:04:08.038 "bdev_lvol_start_shallow_copy", 00:04:08.038 "bdev_lvol_grow_lvstore", 00:04:08.038 "bdev_lvol_get_lvols", 00:04:08.038 "bdev_lvol_get_lvstores", 00:04:08.038 "bdev_lvol_delete", 00:04:08.038 "bdev_lvol_set_read_only", 00:04:08.038 "bdev_lvol_resize", 00:04:08.038 "bdev_lvol_decouple_parent", 00:04:08.038 "bdev_lvol_inflate", 00:04:08.038 "bdev_lvol_rename", 00:04:08.038 "bdev_lvol_clone_bdev", 00:04:08.038 "bdev_lvol_clone", 00:04:08.038 "bdev_lvol_snapshot", 00:04:08.038 "bdev_lvol_create", 00:04:08.038 "bdev_lvol_delete_lvstore", 00:04:08.038 "bdev_lvol_rename_lvstore", 
00:04:08.038 "bdev_lvol_create_lvstore", 00:04:08.038 "bdev_raid_set_options", 00:04:08.038 "bdev_raid_remove_base_bdev", 00:04:08.038 "bdev_raid_add_base_bdev", 00:04:08.038 "bdev_raid_delete", 00:04:08.038 "bdev_raid_create", 00:04:08.038 "bdev_raid_get_bdevs", 00:04:08.038 "bdev_error_inject_error", 00:04:08.038 "bdev_error_delete", 00:04:08.038 "bdev_error_create", 00:04:08.038 "bdev_split_delete", 00:04:08.038 "bdev_split_create", 00:04:08.038 "bdev_delay_delete", 00:04:08.038 "bdev_delay_create", 00:04:08.038 "bdev_delay_update_latency", 00:04:08.038 "bdev_zone_block_delete", 00:04:08.038 "bdev_zone_block_create", 00:04:08.038 "blobfs_create", 00:04:08.038 "blobfs_detect", 00:04:08.038 "blobfs_set_cache_size", 00:04:08.038 "bdev_aio_delete", 00:04:08.038 "bdev_aio_rescan", 00:04:08.038 "bdev_aio_create", 00:04:08.038 "bdev_ftl_set_property", 00:04:08.038 "bdev_ftl_get_properties", 00:04:08.038 "bdev_ftl_get_stats", 00:04:08.038 "bdev_ftl_unmap", 00:04:08.038 "bdev_ftl_unload", 00:04:08.038 "bdev_ftl_delete", 00:04:08.038 "bdev_ftl_load", 00:04:08.038 "bdev_ftl_create", 00:04:08.038 "bdev_virtio_attach_controller", 00:04:08.038 "bdev_virtio_scsi_get_devices", 00:04:08.038 "bdev_virtio_detach_controller", 00:04:08.038 "bdev_virtio_blk_set_hotplug", 00:04:08.038 "bdev_iscsi_delete", 00:04:08.038 "bdev_iscsi_create", 00:04:08.038 "bdev_iscsi_set_options", 00:04:08.038 "accel_error_inject_error", 00:04:08.038 "ioat_scan_accel_module", 00:04:08.038 "dsa_scan_accel_module", 00:04:08.038 "iaa_scan_accel_module", 00:04:08.038 "vfu_virtio_create_fs_endpoint", 00:04:08.038 "vfu_virtio_create_scsi_endpoint", 00:04:08.038 "vfu_virtio_scsi_remove_target", 00:04:08.038 "vfu_virtio_scsi_add_target", 00:04:08.038 "vfu_virtio_create_blk_endpoint", 00:04:08.038 "vfu_virtio_delete_endpoint", 00:04:08.038 "keyring_file_remove_key", 00:04:08.038 "keyring_file_add_key", 00:04:08.038 "keyring_linux_set_options", 00:04:08.038 "fsdev_aio_delete", 00:04:08.038 "fsdev_aio_create", 00:04:08.038 "iscsi_get_histogram", 00:04:08.038 "iscsi_enable_histogram", 00:04:08.038 "iscsi_set_options", 00:04:08.038 "iscsi_get_auth_groups", 00:04:08.038 "iscsi_auth_group_remove_secret", 00:04:08.038 "iscsi_auth_group_add_secret", 00:04:08.038 "iscsi_delete_auth_group", 00:04:08.038 "iscsi_create_auth_group", 00:04:08.038 "iscsi_set_discovery_auth", 00:04:08.038 "iscsi_get_options", 00:04:08.038 "iscsi_target_node_request_logout", 00:04:08.038 "iscsi_target_node_set_redirect", 00:04:08.038 "iscsi_target_node_set_auth", 00:04:08.038 "iscsi_target_node_add_lun", 00:04:08.038 "iscsi_get_stats", 00:04:08.038 "iscsi_get_connections", 00:04:08.038 "iscsi_portal_group_set_auth", 00:04:08.038 "iscsi_start_portal_group", 00:04:08.038 "iscsi_delete_portal_group", 00:04:08.038 "iscsi_create_portal_group", 00:04:08.038 "iscsi_get_portal_groups", 00:04:08.038 "iscsi_delete_target_node", 00:04:08.038 "iscsi_target_node_remove_pg_ig_maps", 00:04:08.038 "iscsi_target_node_add_pg_ig_maps", 00:04:08.038 "iscsi_create_target_node", 00:04:08.038 "iscsi_get_target_nodes", 00:04:08.038 "iscsi_delete_initiator_group", 00:04:08.038 "iscsi_initiator_group_remove_initiators", 00:04:08.038 "iscsi_initiator_group_add_initiators", 00:04:08.038 "iscsi_create_initiator_group", 00:04:08.038 "iscsi_get_initiator_groups", 00:04:08.038 "nvmf_set_crdt", 00:04:08.038 "nvmf_set_config", 00:04:08.038 "nvmf_set_max_subsystems", 00:04:08.038 "nvmf_stop_mdns_prr", 00:04:08.038 "nvmf_publish_mdns_prr", 00:04:08.038 "nvmf_subsystem_get_listeners", 00:04:08.038 
"nvmf_subsystem_get_qpairs", 00:04:08.038 "nvmf_subsystem_get_controllers", 00:04:08.038 "nvmf_get_stats", 00:04:08.038 "nvmf_get_transports", 00:04:08.038 "nvmf_create_transport", 00:04:08.038 "nvmf_get_targets", 00:04:08.038 "nvmf_delete_target", 00:04:08.038 "nvmf_create_target", 00:04:08.038 "nvmf_subsystem_allow_any_host", 00:04:08.038 "nvmf_subsystem_set_keys", 00:04:08.038 "nvmf_subsystem_remove_host", 00:04:08.038 "nvmf_subsystem_add_host", 00:04:08.038 "nvmf_ns_remove_host", 00:04:08.038 "nvmf_ns_add_host", 00:04:08.038 "nvmf_subsystem_remove_ns", 00:04:08.038 "nvmf_subsystem_set_ns_ana_group", 00:04:08.038 "nvmf_subsystem_add_ns", 00:04:08.038 "nvmf_subsystem_listener_set_ana_state", 00:04:08.038 "nvmf_discovery_get_referrals", 00:04:08.038 "nvmf_discovery_remove_referral", 00:04:08.038 "nvmf_discovery_add_referral", 00:04:08.038 "nvmf_subsystem_remove_listener", 00:04:08.038 "nvmf_subsystem_add_listener", 00:04:08.038 "nvmf_delete_subsystem", 00:04:08.038 "nvmf_create_subsystem", 00:04:08.038 "nvmf_get_subsystems", 00:04:08.038 "env_dpdk_get_mem_stats", 00:04:08.038 "nbd_get_disks", 00:04:08.038 "nbd_stop_disk", 00:04:08.038 "nbd_start_disk", 00:04:08.038 "ublk_recover_disk", 00:04:08.038 "ublk_get_disks", 00:04:08.038 "ublk_stop_disk", 00:04:08.038 "ublk_start_disk", 00:04:08.038 "ublk_destroy_target", 00:04:08.038 "ublk_create_target", 00:04:08.038 "virtio_blk_create_transport", 00:04:08.038 "virtio_blk_get_transports", 00:04:08.038 "vhost_controller_set_coalescing", 00:04:08.038 "vhost_get_controllers", 00:04:08.038 "vhost_delete_controller", 00:04:08.038 "vhost_create_blk_controller", 00:04:08.038 "vhost_scsi_controller_remove_target", 00:04:08.038 "vhost_scsi_controller_add_target", 00:04:08.038 "vhost_start_scsi_controller", 00:04:08.038 "vhost_create_scsi_controller", 00:04:08.038 "thread_set_cpumask", 00:04:08.038 "scheduler_set_options", 00:04:08.038 "framework_get_governor", 00:04:08.038 "framework_get_scheduler", 00:04:08.039 "framework_set_scheduler", 00:04:08.039 "framework_get_reactors", 00:04:08.039 "thread_get_io_channels", 00:04:08.039 "thread_get_pollers", 00:04:08.039 "thread_get_stats", 00:04:08.039 "framework_monitor_context_switch", 00:04:08.039 "spdk_kill_instance", 00:04:08.039 "log_enable_timestamps", 00:04:08.039 "log_get_flags", 00:04:08.039 "log_clear_flag", 00:04:08.039 "log_set_flag", 00:04:08.039 "log_get_level", 00:04:08.039 "log_set_level", 00:04:08.039 "log_get_print_level", 00:04:08.039 "log_set_print_level", 00:04:08.039 "framework_enable_cpumask_locks", 00:04:08.039 "framework_disable_cpumask_locks", 00:04:08.039 "framework_wait_init", 00:04:08.039 "framework_start_init", 00:04:08.039 "scsi_get_devices", 00:04:08.039 "bdev_get_histogram", 00:04:08.039 "bdev_enable_histogram", 00:04:08.039 "bdev_set_qos_limit", 00:04:08.039 "bdev_set_qd_sampling_period", 00:04:08.039 "bdev_get_bdevs", 00:04:08.039 "bdev_reset_iostat", 00:04:08.039 "bdev_get_iostat", 00:04:08.039 "bdev_examine", 00:04:08.039 "bdev_wait_for_examine", 00:04:08.039 "bdev_set_options", 00:04:08.039 "accel_get_stats", 00:04:08.039 "accel_set_options", 00:04:08.039 "accel_set_driver", 00:04:08.039 "accel_crypto_key_destroy", 00:04:08.039 "accel_crypto_keys_get", 00:04:08.039 "accel_crypto_key_create", 00:04:08.039 "accel_assign_opc", 00:04:08.039 "accel_get_module_info", 00:04:08.039 "accel_get_opc_assignments", 00:04:08.039 "vmd_rescan", 00:04:08.039 "vmd_remove_device", 00:04:08.039 "vmd_enable", 00:04:08.039 "sock_get_default_impl", 00:04:08.039 "sock_set_default_impl", 
00:04:08.039 "sock_impl_set_options", 00:04:08.039 "sock_impl_get_options", 00:04:08.039 "iobuf_get_stats", 00:04:08.039 "iobuf_set_options", 00:04:08.039 "keyring_get_keys", 00:04:08.039 "vfu_tgt_set_base_path", 00:04:08.039 "framework_get_pci_devices", 00:04:08.039 "framework_get_config", 00:04:08.039 "framework_get_subsystems", 00:04:08.039 "fsdev_set_opts", 00:04:08.039 "fsdev_get_opts", 00:04:08.039 "trace_get_info", 00:04:08.039 "trace_get_tpoint_group_mask", 00:04:08.039 "trace_disable_tpoint_group", 00:04:08.039 "trace_enable_tpoint_group", 00:04:08.039 "trace_clear_tpoint_mask", 00:04:08.039 "trace_set_tpoint_mask", 00:04:08.039 "notify_get_notifications", 00:04:08.039 "notify_get_types", 00:04:08.039 "spdk_get_version", 00:04:08.039 "rpc_get_methods" 00:04:08.039 ] 00:04:08.039 14:13:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.039 14:13:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:08.039 14:13:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1272495 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1272495 ']' 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1272495 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1272495 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1272495' 00:04:08.039 killing process with pid 1272495 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1272495 00:04:08.039 14:13:57 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1272495 00:04:08.606 00:04:08.606 real 0m1.142s 00:04:08.606 user 0m1.930s 00:04:08.606 sys 0m0.442s 00:04:08.606 14:13:57 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.606 14:13:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.606 ************************************ 00:04:08.606 END TEST spdkcli_tcp 00:04:08.606 ************************************ 00:04:08.606 14:13:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:08.606 14:13:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.606 14:13:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.606 14:13:57 -- common/autotest_common.sh@10 -- # set +x 00:04:08.606 ************************************ 00:04:08.606 START TEST dpdk_mem_utility 00:04:08.606 ************************************ 00:04:08.606 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:08.606 * Looking for test storage... 
00:04:08.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:08.606 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:08.606 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:08.606 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:08.606 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:08.606 14:13:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.607 14:13:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:08.607 14:13:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:08.607 14:13:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.607 14:13:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:08.607 14:13:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.607 14:13:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.607 14:13:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.607 14:13:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:08.607 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.607 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:08.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.607 --rc genhtml_branch_coverage=1 00:04:08.607 --rc genhtml_function_coverage=1 00:04:08.607 --rc genhtml_legend=1 00:04:08.607 --rc geninfo_all_blocks=1 00:04:08.607 --rc geninfo_unexecuted_blocks=1 00:04:08.607 00:04:08.607 ' 00:04:08.607 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:08.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.607 --rc 
genhtml_branch_coverage=1 00:04:08.607 --rc genhtml_function_coverage=1 00:04:08.607 --rc genhtml_legend=1 00:04:08.607 --rc geninfo_all_blocks=1 00:04:08.607 --rc geninfo_unexecuted_blocks=1 00:04:08.607 00:04:08.607 ' 00:04:08.607 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:08.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.607 --rc genhtml_branch_coverage=1 00:04:08.607 --rc genhtml_function_coverage=1 00:04:08.607 --rc genhtml_legend=1 00:04:08.607 --rc geninfo_all_blocks=1 00:04:08.607 --rc geninfo_unexecuted_blocks=1 00:04:08.607 00:04:08.607 ' 00:04:08.607 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:08.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.607 --rc genhtml_branch_coverage=1 00:04:08.607 --rc genhtml_function_coverage=1 00:04:08.607 --rc genhtml_legend=1 00:04:08.607 --rc geninfo_all_blocks=1 00:04:08.607 --rc geninfo_unexecuted_blocks=1 00:04:08.607 00:04:08.607 ' 00:04:08.607 14:13:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:08.607 14:13:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1272799 00:04:08.607 14:13:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1272799 00:04:08.607 14:13:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.607 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1272799 ']' 00:04:08.607 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.607 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.607 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.607 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.607 14:13:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:08.866 [2024-11-17 14:13:57.838444] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
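Note: the dpdk_mem_utility test traced here starts spdk_tgt, waits on its RPC socket, then summarizes DPDK memory via scripts/dpdk_mem_info.py (dump output below). A minimal standalone sketch of the same flow, assuming the workspace paths seen in this log; the poll loop stands in for waitforlisten:

    #!/usr/bin/env bash
    # Sketch of the dpdk_mem_utility flow (paths from this workspace); run as root for hugepages.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt &
    spdk_pid=$!
    # Poll the RPC socket until the target answers, as waitforlisten does.
    until $SPDK/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done
    # Dump the target's DPDK memory state, then summarize it.
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats    # writes /tmp/spdk_mem_dump.txt
    $SPDK/scripts/dpdk_mem_info.py                 # heap/mempool/memzone totals
    $SPDK/scripts/dpdk_mem_info.py -m 0            # per-element view of heap 0
    kill $spdk_pid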
00:04:08.866 [2024-11-17 14:13:57.838488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272799 ] 00:04:08.866 [2024-11-17 14:13:57.914739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.866 [2024-11-17 14:13:57.957249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.126 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.126 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:09.126 14:13:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:09.126 14:13:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:09.126 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.126 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:09.126 { 00:04:09.126 "filename": "/tmp/spdk_mem_dump.txt" 00:04:09.126 } 00:04:09.126 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.126 14:13:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:09.126 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:09.126 1 heaps totaling size 810.000000 MiB 00:04:09.126 size: 810.000000 MiB heap id: 0 00:04:09.126 end heaps---------- 00:04:09.126 9 mempools totaling size 595.772034 MiB 00:04:09.126 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:09.126 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:09.126 size: 92.545471 MiB name: bdev_io_1272799 00:04:09.126 size: 50.003479 MiB name: msgpool_1272799 00:04:09.126 size: 36.509338 MiB name: fsdev_io_1272799 00:04:09.126 size: 21.763794 MiB name: PDU_Pool 00:04:09.126 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:09.126 size: 4.133484 MiB name: evtpool_1272799 00:04:09.126 size: 0.026123 MiB name: Session_Pool 00:04:09.126 end mempools------- 00:04:09.126 6 memzones totaling size 4.142822 MiB 00:04:09.126 size: 1.000366 MiB name: RG_ring_0_1272799 00:04:09.126 size: 1.000366 MiB name: RG_ring_1_1272799 00:04:09.126 size: 1.000366 MiB name: RG_ring_4_1272799 00:04:09.126 size: 1.000366 MiB name: RG_ring_5_1272799 00:04:09.126 size: 0.125366 MiB name: RG_ring_2_1272799 00:04:09.126 size: 0.015991 MiB name: RG_ring_3_1272799 00:04:09.126 end memzones------- 00:04:09.126 14:13:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:09.126 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:09.126 list of free elements. 
size: 10.862488 MiB 00:04:09.126 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:09.126 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:09.126 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:09.126 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:09.126 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:09.126 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:09.126 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:09.126 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:09.126 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:09.126 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:09.126 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:09.126 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:09.126 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:09.126 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:09.126 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:09.126 list of standard malloc elements. size: 199.218628 MiB 00:04:09.126 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:09.126 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:09.126 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:09.126 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:09.126 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:09.126 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:09.126 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:09.126 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:09.126 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:09.126 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:09.126 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:09.126 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:09.126 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:09.126 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:09.126 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:09.126 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:09.126 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:09.126 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:09.126 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:09.126 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:09.126 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:09.126 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:09.126 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:09.126 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:09.126 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:09.126 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:09.126 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:09.126 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:09.126 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:09.126 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:09.126 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:09.126 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:09.126 list of memzone associated elements. size: 599.918884 MiB 00:04:09.126 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:09.126 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:09.126 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:09.126 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:09.126 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:09.126 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1272799_0 00:04:09.126 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:09.126 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1272799_0 00:04:09.126 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:09.126 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1272799_0 00:04:09.126 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:09.126 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:09.126 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:09.126 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:09.126 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:09.126 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1272799_0 00:04:09.126 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:09.126 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1272799 00:04:09.126 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:09.126 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1272799 00:04:09.126 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:09.126 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:09.126 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:09.127 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:09.127 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:09.127 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:09.127 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:09.127 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:09.127 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:09.127 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1272799 00:04:09.127 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:09.127 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1272799 00:04:09.127 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:09.127 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1272799 00:04:09.127 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:09.127 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1272799 00:04:09.127 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:09.127 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1272799 00:04:09.127 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:09.127 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1272799 00:04:09.127 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:09.127 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:09.127 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:09.127 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:09.127 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:09.127 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:09.127 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:09.127 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1272799 00:04:09.127 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:09.127 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1272799 00:04:09.127 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:09.127 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:09.127 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:09.127 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:09.127 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:09.127 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1272799 00:04:09.127 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:09.127 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:09.127 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:09.127 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1272799 00:04:09.127 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:09.127 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1272799 00:04:09.127 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:09.127 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1272799 00:04:09.127 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:09.127 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:09.127 14:13:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:09.127 14:13:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1272799 00:04:09.127 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1272799 ']' 00:04:09.127 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1272799 00:04:09.127 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:09.127 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:09.127 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1272799 00:04:09.127 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:09.127 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:09.127 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1272799' 00:04:09.127 killing process with pid 1272799 00:04:09.127 14:13:58 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1272799 00:04:09.127 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1272799 00:04:09.695 00:04:09.695 real 0m1.027s 00:04:09.695 user 0m0.969s 00:04:09.695 sys 0m0.409s 00:04:09.695 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.695 14:13:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:09.695 ************************************ 00:04:09.695 END TEST dpdk_mem_utility 00:04:09.695 ************************************ 00:04:09.695 14:13:58 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:09.695 14:13:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.695 14:13:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.695 14:13:58 -- common/autotest_common.sh@10 -- # set +x 00:04:09.695 ************************************ 00:04:09.695 START TEST event 00:04:09.695 ************************************ 00:04:09.695 14:13:58 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:09.695 * Looking for test storage... 00:04:09.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:09.695 14:13:58 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:09.695 14:13:58 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:09.695 14:13:58 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:09.695 14:13:58 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:09.695 14:13:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.695 14:13:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.695 14:13:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.695 14:13:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.695 14:13:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.695 14:13:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.695 14:13:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.695 14:13:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.695 14:13:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.695 14:13:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.695 14:13:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.695 14:13:58 event -- scripts/common.sh@344 -- # case "$op" in 00:04:09.695 14:13:58 event -- scripts/common.sh@345 -- # : 1 00:04:09.695 14:13:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.695 14:13:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:09.695 14:13:58 event -- scripts/common.sh@365 -- # decimal 1 00:04:09.695 14:13:58 event -- scripts/common.sh@353 -- # local d=1 00:04:09.695 14:13:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.695 14:13:58 event -- scripts/common.sh@355 -- # echo 1 00:04:09.695 14:13:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.695 14:13:58 event -- scripts/common.sh@366 -- # decimal 2 00:04:09.695 14:13:58 event -- scripts/common.sh@353 -- # local d=2 00:04:09.695 14:13:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.695 14:13:58 event -- scripts/common.sh@355 -- # echo 2 00:04:09.695 14:13:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.695 14:13:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.695 14:13:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.695 14:13:58 event -- scripts/common.sh@368 -- # return 0 00:04:09.695 14:13:58 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.695 14:13:58 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:09.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.695 --rc genhtml_branch_coverage=1 00:04:09.695 --rc genhtml_function_coverage=1 00:04:09.695 --rc genhtml_legend=1 00:04:09.695 --rc geninfo_all_blocks=1 00:04:09.695 --rc geninfo_unexecuted_blocks=1 00:04:09.695 00:04:09.695 ' 00:04:09.695 14:13:58 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:09.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.695 --rc genhtml_branch_coverage=1 00:04:09.695 --rc genhtml_function_coverage=1 00:04:09.695 --rc genhtml_legend=1 00:04:09.695 --rc geninfo_all_blocks=1 00:04:09.695 --rc geninfo_unexecuted_blocks=1 00:04:09.695 00:04:09.695 ' 00:04:09.695 14:13:58 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:09.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.695 --rc genhtml_branch_coverage=1 00:04:09.695 --rc genhtml_function_coverage=1 00:04:09.695 --rc genhtml_legend=1 00:04:09.696 --rc geninfo_all_blocks=1 00:04:09.696 --rc geninfo_unexecuted_blocks=1 00:04:09.696 00:04:09.696 ' 00:04:09.696 14:13:58 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:09.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.696 --rc genhtml_branch_coverage=1 00:04:09.696 --rc genhtml_function_coverage=1 00:04:09.696 --rc genhtml_legend=1 00:04:09.696 --rc geninfo_all_blocks=1 00:04:09.696 --rc geninfo_unexecuted_blocks=1 00:04:09.696 00:04:09.696 ' 00:04:09.696 14:13:58 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:09.696 14:13:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:09.696 14:13:58 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:09.696 14:13:58 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:09.696 14:13:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.696 14:13:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:09.696 ************************************ 00:04:09.696 START TEST event_perf 00:04:09.696 ************************************ 00:04:09.954 14:13:58 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:09.954 Running I/O for 1 seconds...[2024-11-17 14:13:58.939580] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:09.954 [2024-11-17 14:13:58.939648] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273089 ] 00:04:09.954 [2024-11-17 14:13:59.020171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:09.954 [2024-11-17 14:13:59.064170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.954 [2024-11-17 14:13:59.064278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:09.954 [2024-11-17 14:13:59.064399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.954 [2024-11-17 14:13:59.064400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:10.889 Running I/O for 1 seconds... 00:04:10.889 lcore 0: 199453 00:04:10.889 lcore 1: 199452 00:04:10.889 lcore 2: 199452 00:04:10.889 lcore 3: 199452 00:04:10.889 done. 00:04:10.889 00:04:10.889 real 0m1.192s 00:04:10.889 user 0m4.099s 00:04:10.889 sys 0m0.088s 00:04:10.889 14:14:00 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.889 14:14:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:10.889 ************************************ 00:04:10.889 END TEST event_perf 00:04:10.889 ************************************ 00:04:11.148 14:14:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:11.148 14:14:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:11.148 14:14:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.148 14:14:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.148 ************************************ 00:04:11.148 START TEST event_reactor 00:04:11.148 ************************************ 00:04:11.148 14:14:00 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:11.148 [2024-11-17 14:14:00.197016] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
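Note: the event_perf run above can be repeated by hand. -m takes a hex cpumask (0xF = binary 1111, lcores 0-3; 0x3 = lcores 0-1) and -t the run time in seconds, matching the "Running I/O for 1 seconds" banner. A minimal sketch with this workspace's paths:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Expect one "lcore N: <event count>" line per core in the mask.
    sudo $SPDK/test/event/event_perf/event_perf -m 0xF -t 1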
00:04:11.148 [2024-11-17 14:14:00.197087] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273339 ] 00:04:11.148 [2024-11-17 14:14:00.275451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.148 [2024-11-17 14:14:00.316659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.529 test_start 00:04:12.529 oneshot 00:04:12.529 tick 100 00:04:12.529 tick 100 00:04:12.529 tick 250 00:04:12.529 tick 100 00:04:12.529 tick 100 00:04:12.529 tick 100 00:04:12.529 tick 250 00:04:12.529 tick 500 00:04:12.529 tick 100 00:04:12.529 tick 100 00:04:12.529 tick 250 00:04:12.529 tick 100 00:04:12.529 tick 100 00:04:12.529 test_end 00:04:12.529 00:04:12.529 real 0m1.178s 00:04:12.529 user 0m1.101s 00:04:12.529 sys 0m0.073s 00:04:12.529 14:14:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.529 14:14:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:12.529 ************************************ 00:04:12.529 END TEST event_reactor 00:04:12.529 ************************************ 00:04:12.529 14:14:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:12.529 14:14:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:12.529 14:14:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.529 14:14:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.529 ************************************ 00:04:12.529 START TEST event_reactor_perf 00:04:12.529 ************************************ 00:04:12.529 14:14:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:12.529 [2024-11-17 14:14:01.448321] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
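Note: reactor_perf, started below, measures raw event throughput on a single core and prints one summary line. Repeating it by hand, with the same workspace paths:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo $SPDK/test/event/reactor_perf/reactor_perf -t 1
    # Prints "Performance: <N> events per second" (504258 in the run below).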
00:04:12.529 [2024-11-17 14:14:01.448472] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273595 ] 00:04:12.529 [2024-11-17 14:14:01.527702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.529 [2024-11-17 14:14:01.567881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.464 test_start 00:04:13.464 test_end 00:04:13.464 Performance: 504258 events per second 00:04:13.464 00:04:13.464 real 0m1.179s 00:04:13.464 user 0m1.101s 00:04:13.464 sys 0m0.075s 00:04:13.464 14:14:02 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.464 14:14:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:13.464 ************************************ 00:04:13.464 END TEST event_reactor_perf 00:04:13.464 ************************************ 00:04:13.464 14:14:02 event -- event/event.sh@49 -- # uname -s 00:04:13.464 14:14:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:13.464 14:14:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:13.464 14:14:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.464 14:14:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.464 14:14:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.464 ************************************ 00:04:13.464 START TEST event_scheduler 00:04:13.464 ************************************ 00:04:13.464 14:14:02 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:13.724 * Looking for test storage... 
00:04:13.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.724 14:14:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.724 --rc genhtml_branch_coverage=1 00:04:13.724 --rc genhtml_function_coverage=1 00:04:13.724 --rc genhtml_legend=1 00:04:13.724 --rc geninfo_all_blocks=1 00:04:13.724 --rc geninfo_unexecuted_blocks=1 00:04:13.724 00:04:13.724 ' 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.724 --rc genhtml_branch_coverage=1 00:04:13.724 --rc genhtml_function_coverage=1 00:04:13.724 --rc genhtml_legend=1 00:04:13.724 --rc geninfo_all_blocks=1 00:04:13.724 --rc geninfo_unexecuted_blocks=1 00:04:13.724 00:04:13.724 ' 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.724 --rc genhtml_branch_coverage=1 00:04:13.724 --rc genhtml_function_coverage=1 00:04:13.724 --rc genhtml_legend=1 00:04:13.724 --rc geninfo_all_blocks=1 00:04:13.724 --rc geninfo_unexecuted_blocks=1 00:04:13.724 00:04:13.724 ' 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.724 --rc genhtml_branch_coverage=1 00:04:13.724 --rc genhtml_function_coverage=1 00:04:13.724 --rc genhtml_legend=1 00:04:13.724 --rc geninfo_all_blocks=1 00:04:13.724 --rc geninfo_unexecuted_blocks=1 00:04:13.724 00:04:13.724 ' 00:04:13.724 14:14:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:13.724 14:14:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1273876 00:04:13.724 14:14:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:13.724 14:14:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.724 14:14:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1273876 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1273876 ']' 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.724 14:14:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.724 [2024-11-17 14:14:02.905095] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:13.724 [2024-11-17 14:14:02.905141] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273876 ] 00:04:13.983 [2024-11-17 14:14:02.977001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:13.983 [2024-11-17 14:14:03.020466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.983 [2024-11-17 14:14:03.020577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.983 [2024-11-17 14:14:03.020684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:13.983 [2024-11-17 14:14:03.020685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:13.983 14:14:03 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.983 14:14:03 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:13.983 14:14:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:13.983 14:14:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.983 14:14:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.983 [2024-11-17 14:14:03.069191] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:13.983 [2024-11-17 14:14:03.069207] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:13.983 [2024-11-17 14:14:03.069216] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:13.983 [2024-11-17 14:14:03.069222] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:13.983 [2024-11-17 14:14:03.069227] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:13.983 14:14:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.983 14:14:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:13.983 14:14:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.983 14:14:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.983 [2024-11-17 14:14:03.142992] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
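Note: the scheduler RPCs traced above (framework_set_scheduler, framework_start_init) and below (scheduler_thread_create) go through an rpc.py plugin. A sketch of the same sequence, assuming the scheduler test app is listening on /var/tmp/spdk.sock and scheduler_plugin (from test/event/scheduler) is importable, as the harness arranges:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
    $RPC framework_set_scheduler dynamic                         # pick the dynamic scheduler
    $RPC framework_start_init                                    # leave the --wait-for-rpc state
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100  # busy thread pinned to core 0
    $RPC scheduler_thread_create -n idle_pinned -m 0x2 -a 0      # idle thread pinned to core 1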
00:04:13.983 14:14:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.983 14:14:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:13.983 14:14:03 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.983 14:14:03 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.983 14:14:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.983 ************************************ 00:04:13.983 START TEST scheduler_create_thread 00:04:13.983 ************************************ 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.983 2 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.983 3 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.983 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.242 4 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.242 5 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.242 6 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.242 7 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.242 8 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.242 9 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.242 10 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.242 14:14:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.176 14:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.176 14:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:15.176 14:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.176 14:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.549 14:14:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.549 14:14:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:16.549 14:14:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:16.549 14:14:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.549 14:14:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.483 14:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.483 00:04:17.483 real 0m3.380s 00:04:17.483 user 0m0.026s 00:04:17.483 sys 0m0.004s 00:04:17.483 14:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.484 14:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.484 ************************************ 00:04:17.484 END TEST scheduler_create_thread 00:04:17.484 ************************************ 00:04:17.484 14:14:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:17.484 14:14:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1273876 00:04:17.484 14:14:06 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1273876 ']' 00:04:17.484 14:14:06 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1273876 00:04:17.484 14:14:06 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:17.484 14:14:06 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.484 14:14:06 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1273876 00:04:17.484 14:14:06 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:17.484 14:14:06 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:17.484 14:14:06 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1273876' 00:04:17.484 killing process with pid 1273876 00:04:17.484 14:14:06 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1273876 00:04:17.484 14:14:06 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1273876 00:04:17.741 [2024-11-17 14:14:06.934948] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
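Note: besides pinned threads, the test above exercises the full thread lifecycle; scheduler_thread_create prints the new thread id (11 and 12 in the trace), which the later calls reuse. A sketch with the same plugin RPCs, under the same assumptions as above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
    THREAD_ID=$($RPC scheduler_thread_create -n half_active -a 0)  # create; capture the printed id
    $RPC scheduler_thread_set_active "$THREAD_ID" 50               # retune to 50% active load
    $RPC scheduler_thread_delete "$THREAD_ID"                      # remove it again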
00:04:18.000
00:04:18.000 real 0m4.460s
00:04:18.000 user 0m7.825s
00:04:18.000 sys 0m0.353s
00:04:18.000 14:14:07 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:18.000 14:14:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:18.000 ************************************
00:04:18.000 END TEST event_scheduler
00:04:18.000 ************************************
00:04:18.000 14:14:07 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:18.000 14:14:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:18.000 14:14:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:18.000 14:14:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:18.000 14:14:07 event -- common/autotest_common.sh@10 -- # set +x
00:04:18.000 ************************************
00:04:18.000 START TEST app_repeat
00:04:18.000 ************************************
00:04:18.000 14:14:07 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:04:18.000 14:14:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:18.000 14:14:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:18.000 14:14:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:18.000 14:14:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:18.000 14:14:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:18.000 14:14:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:18.000 14:14:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:18.001 14:14:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1274621
00:04:18.001 14:14:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:18.001 14:14:07 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:18.001 14:14:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1274621'
Process app_repeat pid: 1274621
00:04:18.001 14:14:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:18.260 14:14:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:04:18.260 14:14:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1274621 /var/tmp/spdk-nbd.sock
00:04:18.260 14:14:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1274621 ']'
00:04:18.260 14:14:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:18.260 14:14:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:18.260 14:14:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:18.260 14:14:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:18.260 14:14:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:18.260 [2024-11-17 14:14:07.248388] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
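
Editor's note: the start-up handshake above is a common pattern — launch the app under a cleanup trap, then `waitforlisten` polls until the RPC socket answers. A minimal sketch of that pattern; the relative paths, the use of `rpc_get_methods` as the liveness probe, and the retry cadence are assumptions for illustration, not the helper's actual logic:

# Sketch: start the app under test, guarantee cleanup, poll its RPC socket.
rpc_sock=/var/tmp/spdk-nbd.sock
./app_repeat -r "$rpc_sock" -m 0x3 -t 4 &          # app under test, backgrounded
repeat_pid=$!
trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT   # cleanup on any exit path
max_retries=100
until ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    (( max_retries-- > 0 )) || { echo "app never listened on $rpc_sock" >&2; exit 1; }
    sleep 0.1
done
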
00:04:18.260 [2024-11-17 14:14:07.248439] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274621 ]
00:04:18.260 [2024-11-17 14:14:07.324215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:18.260 [2024-11-17 14:14:07.365244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:18.260 [2024-11-17 14:14:07.365244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:18.260 14:14:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:18.260 14:14:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:18.260 14:14:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:18.519 Malloc0
00:04:18.519 14:14:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:18.779 Malloc1
00:04:18.779 14:14:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:18.779 14:14:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:19.038 /dev/nbd0
00:04:19.038 14:14:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:19.038 14:14:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:19.038 1+0 records in
00:04:19.038 1+0 records out
00:04:19.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188079 s, 21.8 MB/s
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:19.038 14:14:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:19.038 14:14:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:19.038 14:14:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:19.038 14:14:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:19.297 /dev/nbd1
00:04:19.297 14:14:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:19.297 14:14:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:19.297 1+0 records in
00:04:19.297 1+0 records out
00:04:19.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384387 s, 10.7 MB/s
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:19.297 14:14:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:19.297 14:14:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:19.297 14:14:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
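
Editor's note: the waitfornbd helper traced above waits for the device node to appear in /proc/partitions, then proves it actually serves reads with a single direct-I/O dd and a size check. A sketch of that pattern; the retry budget and the scratch-file path are illustrative, not the actual autotest_common.sh code:

# Sketch: wait for an NBD device, then confirm it answers a 4 KiB O_DIRECT read.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            # A full 4096-byte read means the NBD connection is live.
            if [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]; then
                rm -f /tmp/nbdtest
                return 0
            fi
        fi
        sleep 0.1
    done
    return 1
}
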
14:14:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:19.297 14:14:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:19.297 14:14:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:19.556 {
00:04:19.556 "nbd_device": "/dev/nbd0",
00:04:19.556 "bdev_name": "Malloc0"
00:04:19.556 },
00:04:19.556 {
00:04:19.556 "nbd_device": "/dev/nbd1",
00:04:19.556 "bdev_name": "Malloc1"
00:04:19.556 }
00:04:19.556 ]'
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:19.556 {
00:04:19.556 "nbd_device": "/dev/nbd0",
00:04:19.556 "bdev_name": "Malloc0"
00:04:19.556 },
00:04:19.556 {
00:04:19.556 "nbd_device": "/dev/nbd1",
00:04:19.556 "bdev_name": "Malloc1"
00:04:19.556 }
00:04:19.556 ]'
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:19.556 /dev/nbd1'
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:19.556 /dev/nbd1'
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:19.556 256+0 records in
00:04:19.556 256+0 records out
00:04:19.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106969 s, 98.0 MB/s
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:19.556 256+0 records in
00:04:19.556 256+0 records out
00:04:19.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146967 s, 71.3 MB/s
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:19.556 256+0 records in
00:04:19.556 256+0 records out
00:04:19.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151368 s, 69.3 MB/s
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:19.556 14:14:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:19.815 14:14:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:19.815 14:14:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:19.815 14:14:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:19.815 14:14:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:19.815 14:14:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:19.815 14:14:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:19.815 14:14:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:19.815 14:14:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:19.815 14:14:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:19.815 14:14:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:20.074 14:14:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:20.074 14:14:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:20.074 14:14:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:20.074 14:14:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:20.074 14:14:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:20.074 14:14:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:20.074 14:14:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:20.074 14:14:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:20.074 14:14:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:20.074 14:14:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:20.074 14:14:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:20.332 14:14:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:20.332 14:14:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:20.590 14:14:09 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:20.590 [2024-11-17 14:14:09.758978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:20.590 [2024-11-17 14:14:09.796825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:20.590 [2024-11-17 14:14:09.796826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:20.849 [2024-11-17 14:14:09.837907] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:20.849 [2024-11-17 14:14:09.837948] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:24.134 14:14:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:24.134 14:14:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
spdk_app_start Round 1
00:04:24.134 14:14:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1274621 /var/tmp/spdk-nbd.sock
00:04:24.134 14:14:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1274621 ']'
00:04:24.134 14:14:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:24.134 14:14:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:24.134 14:14:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
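
Editor's note: the nbd_get_count step traced above asks the target which NBD devices are attached and counts them by parsing the JSON reply with jq — 2 while both disks are up, 0 after nbd_stop_disks. A sketch of that pattern; the function body and socket path are illustrative, not the actual nbd_common.sh helper:

# Sketch: count attached NBD devices by parsing the nbd_get_disks JSON reply.
nbd_get_count() {
    local rpc_server=$1 json names
    json=$(./scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    # grep -c exits nonzero on zero matches but still prints 0, hence the || true.
    echo "$names" | grep -c /dev/nbd || true
}

count=$(nbd_get_count /var/tmp/spdk-nbd.sock)   # e.g. 2 before stop, 0 after
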
00:04:24.134 14:14:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:24.134 14:14:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:24.134 14:14:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:24.134 14:14:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:24.134 14:14:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:24.134 Malloc0
00:04:24.134 14:14:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:24.134 Malloc1
00:04:24.134 14:14:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:24.134 14:14:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:24.134 14:14:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:24.134 14:14:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:24.134 14:14:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:24.134 14:14:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:24.135 14:14:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:24.135 14:14:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:24.135 14:14:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:24.135 14:14:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:24.135 14:14:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:24.135 14:14:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:24.135 14:14:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:24.135 14:14:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:24.135 14:14:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:24.135 14:14:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:24.393 /dev/nbd0
00:04:24.393 14:14:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:24.393 14:14:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:24.393 14:14:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:24.393 14:14:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:24.393 14:14:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:24.393 14:14:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:24.394 14:14:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:24.394 14:14:13 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:24.394 14:14:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:24.394 14:14:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:24.394 14:14:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:24.394 1+0 records in
00:04:24.394 1+0 records out
00:04:24.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202375 s, 20.2 MB/s
00:04:24.394 14:14:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:24.394 14:14:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:24.394 14:14:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:24.394 14:14:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:24.394 14:14:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:24.394 14:14:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:24.394 14:14:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:24.394 14:14:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:24.653 /dev/nbd1
00:04:24.653 14:14:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:24.653 14:14:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:24.653 1+0 records in
00:04:24.653 1+0 records out
00:04:24.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223408 s, 18.3 MB/s
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:24.653 14:14:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:24.653 14:14:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:24.653 14:14:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:24.653 14:14:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:24.653 14:14:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:24.653 14:14:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:24.912 {
00:04:24.912 "nbd_device": "/dev/nbd0",
00:04:24.912 "bdev_name": "Malloc0"
00:04:24.912 },
00:04:24.912 {
00:04:24.912 "nbd_device": "/dev/nbd1",
00:04:24.912 "bdev_name": "Malloc1"
00:04:24.912 }
00:04:24.912 ]'
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:24.912 {
00:04:24.912 "nbd_device": "/dev/nbd0",
00:04:24.912 "bdev_name": "Malloc0"
00:04:24.912 },
00:04:24.912 {
00:04:24.912 "nbd_device": "/dev/nbd1",
00:04:24.912 "bdev_name": "Malloc1"
00:04:24.912 }
00:04:24.912 ]'
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:24.912 /dev/nbd1'
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:24.912 /dev/nbd1'
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:24.912 14:14:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:24.912 256+0 records in
00:04:24.912 256+0 records out
00:04:24.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106171 s, 98.8 MB/s
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:24.912 256+0 records in
00:04:24.912 256+0 records out
00:04:24.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141685 s, 74.0 MB/s
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:24.912 256+0 records in
00:04:24.912 256+0 records out
00:04:24.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151382 s, 69.3 MB/s
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:24.912 14:14:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:25.171 14:14:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:25.171 14:14:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:25.171 14:14:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:25.171 14:14:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:25.171 14:14:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:25.171 14:14:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:25.171 14:14:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:25.171 14:14:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:25.171 14:14:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:25.171 14:14:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:25.430 14:14:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:25.430 14:14:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:25.430 14:14:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:25.430 14:14:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:25.430 14:14:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:25.430 14:14:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:25.430 14:14:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:25.430 14:14:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:25.430 14:14:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:25.430 14:14:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:25.430 14:14:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:25.688 14:14:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:25.688 14:14:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:25.688 14:14:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:25.688 14:14:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:25.688 14:14:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:25.688 14:14:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:25.689 14:14:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:25.689 14:14:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:25.689 14:14:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:25.689 14:14:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:25.689 14:14:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:25.689 14:14:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:25.689 14:14:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:25.948 14:14:14 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:25.948 [2024-11-17 14:14:15.103063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:25.948 [2024-11-17 14:14:15.140805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:25.948 [2024-11-17 14:14:15.140805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:26.207 [2024-11-17 14:14:15.182854] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:26.207 [2024-11-17 14:14:15.182906] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:29.492 14:14:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:29.492 14:14:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
spdk_app_start Round 2
00:04:29.492 14:14:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1274621 /var/tmp/spdk-nbd.sock
00:04:29.492 14:14:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1274621 ']'
00:04:29.492 14:14:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:29.492 14:14:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:29.492 14:14:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
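
Editor's note: each round's data pass, traced above, writes one shared 1 MiB random pattern to every NBD device with O_DIRECT and then compares it back byte for byte. A sketch of that write/verify loop; the pattern-file location is illustrative:

# Sketch: write one random 1 MiB pattern to each NBD device, then verify it.
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB pattern
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # bypass page cache
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                             # fails loudly on mismatch
done
rm "$tmp_file"
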
00:04:29.492 14:14:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:29.492 14:14:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:29.492 14:14:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:29.492 14:14:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:29.492 14:14:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:29.492 Malloc0
00:04:29.492 14:14:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:29.492 Malloc1
00:04:29.492 14:14:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:29.492 14:14:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:29.750 /dev/nbd0
00:04:29.750 14:14:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:29.750 14:14:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:29.750 1+0 records in
00:04:29.750 1+0 records out
00:04:29.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 9.1673e-05 s, 44.7 MB/s
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:29.750 14:14:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:29.750 14:14:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:29.750 14:14:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:29.750 14:14:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:30.010 /dev/nbd1
00:04:30.010 14:14:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:30.010 14:14:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:30.010 1+0 records in
00:04:30.010 1+0 records out
00:04:30.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254393 s, 16.1 MB/s
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:30.010 14:14:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:30.010 14:14:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:30.010 14:14:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:30.010 14:14:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:30.010 14:14:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:30.010 14:14:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:30.269 {
00:04:30.269 "nbd_device": "/dev/nbd0",
00:04:30.269 "bdev_name": "Malloc0"
00:04:30.269 },
00:04:30.269 {
00:04:30.269 "nbd_device": "/dev/nbd1",
00:04:30.269 "bdev_name": "Malloc1"
00:04:30.269 }
00:04:30.269 ]'
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:30.269 {
00:04:30.269 "nbd_device": "/dev/nbd0",
00:04:30.269 "bdev_name": "Malloc0"
00:04:30.269 },
00:04:30.269 {
00:04:30.269 "nbd_device": "/dev/nbd1",
00:04:30.269 "bdev_name": "Malloc1"
00:04:30.269 }
00:04:30.269 ]'
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:30.269 /dev/nbd1'
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:30.269 /dev/nbd1'
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:30.269 256+0 records in
00:04:30.269 256+0 records out
00:04:30.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107186 s, 97.8 MB/s
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:30.269 256+0 records in
00:04:30.269 256+0 records out
00:04:30.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146687 s, 71.5 MB/s
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:30.269 256+0 records in
00:04:30.269 256+0 records out
00:04:30.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157961 s, 66.4 MB/s
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:30.269 14:14:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:30.527 14:14:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:30.527 14:14:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:30.527 14:14:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:30.527 14:14:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:30.527 14:14:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:30.527 14:14:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:30.527 14:14:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:30.527 14:14:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:30.527 14:14:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:30.527 14:14:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:30.786 14:14:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:30.786 14:14:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:30.786 14:14:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:30.786 14:14:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:30.786 14:14:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:30.786 14:14:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:30.786 14:14:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:30.786 14:14:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:30.786 14:14:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:30.786 14:14:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:30.787 14:14:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:30.787 14:14:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:30.787 14:14:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:31.046 14:14:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:31.046 14:14:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:31.046 14:14:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:31.046 14:14:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:31.046 14:14:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:31.046 14:14:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:31.046 14:14:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:31.046 14:14:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:31.046 14:14:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:31.046 14:14:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:31.046 14:14:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:31.046 14:14:20 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:31.305 [2024-11-17 14:14:20.400691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:31.305 [2024-11-17 14:14:20.439639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:31.305 [2024-11-17 14:14:20.439639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:31.305 [2024-11-17 14:14:20.481494] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:31.305 [2024-11-17 14:14:20.481534] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:34.597 14:14:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1274621 /var/tmp/spdk-nbd.sock
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1274621 ']'
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:34.597 14:14:23 event.app_repeat -- event/event.sh@39 -- # killprocess 1274621
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1274621 ']'
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1274621
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1274621
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1274621'
killing process with pid 1274621
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1274621
00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1274621
00:04:34.597 spdk_app_start is called in Round 0.
00:04:34.597 Shutdown signal received, stop current app iteration
00:04:34.597 Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 reinitialization...
00:04:34.597 spdk_app_start is called in Round 1.
00:04:34.597 Shutdown signal received, stop current app iteration
00:04:34.597 Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 reinitialization...
00:04:34.597 spdk_app_start is called in Round 2.
00:04:34.597 Shutdown signal received, stop current app iteration
00:04:34.597 Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 reinitialization...
00:04:34.597 spdk_app_start is called in Round 3.
00:04:34.597 Shutdown signal received, stop current app iteration 00:04:34.597 14:14:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:34.597 14:14:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:34.597 00:04:34.597 real 0m16.452s 00:04:34.597 user 0m36.167s 00:04:34.597 sys 0m2.554s 00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.597 14:14:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.597 ************************************ 00:04:34.597 END TEST app_repeat 00:04:34.597 ************************************ 00:04:34.597 14:14:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:34.597 14:14:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:34.597 14:14:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.597 14:14:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.597 14:14:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.597 ************************************ 00:04:34.597 START TEST cpu_locks 00:04:34.597 ************************************ 00:04:34.597 14:14:23 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:34.857 * Looking for test storage... 00:04:34.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.857 14:14:23 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.857 --rc genhtml_branch_coverage=1 00:04:34.857 --rc genhtml_function_coverage=1 00:04:34.857 --rc genhtml_legend=1 00:04:34.857 --rc geninfo_all_blocks=1 00:04:34.857 --rc geninfo_unexecuted_blocks=1 00:04:34.857 00:04:34.857 ' 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.857 --rc genhtml_branch_coverage=1 00:04:34.857 --rc genhtml_function_coverage=1 00:04:34.857 --rc genhtml_legend=1 00:04:34.857 --rc geninfo_all_blocks=1 00:04:34.857 --rc geninfo_unexecuted_blocks=1 00:04:34.857 00:04:34.857 ' 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.857 --rc genhtml_branch_coverage=1 00:04:34.857 --rc genhtml_function_coverage=1 00:04:34.857 --rc genhtml_legend=1 00:04:34.857 --rc geninfo_all_blocks=1 00:04:34.857 --rc geninfo_unexecuted_blocks=1 00:04:34.857 00:04:34.857 ' 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.857 --rc genhtml_branch_coverage=1 00:04:34.857 --rc genhtml_function_coverage=1 00:04:34.857 --rc genhtml_legend=1 00:04:34.857 --rc geninfo_all_blocks=1 00:04:34.857 --rc geninfo_unexecuted_blocks=1 00:04:34.857 00:04:34.857 ' 00:04:34.857 14:14:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:34.857 14:14:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:34.857 14:14:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:34.857 14:14:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.857 14:14:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.857 ************************************ 
00:04:34.857 START TEST default_locks 00:04:34.857 ************************************ 00:04:34.857 14:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:34.857 14:14:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1277626 00:04:34.857 14:14:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1277626 00:04:34.857 14:14:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.857 14:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1277626 ']' 00:04:34.857 14:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.857 14:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.857 14:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.857 14:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.857 14:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.857 [2024-11-17 14:14:24.003053] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:34.857 [2024-11-17 14:14:24.003093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277626 ] 00:04:34.857 [2024-11-17 14:14:24.058552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.116 [2024-11-17 14:14:24.099818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.116 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.116 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:35.116 14:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1277626 00:04:35.116 14:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1277626 00:04:35.116 14:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:35.683 lslocks: write error 00:04:35.683 14:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1277626 00:04:35.683 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1277626 ']' 00:04:35.683 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1277626 00:04:35.683 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:35.683 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.683 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1277626 00:04:35.683 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.683 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.683 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1277626' 00:04:35.683 killing process with pid 1277626 00:04:35.683 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1277626 00:04:35.683 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1277626 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1277626 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1277626 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1277626 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1277626 ']' 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
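killprocess, traced above for pid 1277626, is deliberately more careful than a bare kill: it looks up the pid's comm with ps (an SPDK target shows up as reactor_0), refuses to signal a plain sudo, sends SIGTERM, and reaps the pid with wait. A simplified stand-in that keeps the same shape (the real helper's sudo branch is omitted here):

  killprocess_sketch() {
    local pid=$1 name
    name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for the target above
    [ "$name" = sudo ] && return 1            # the real helper treats sudo specially
    echo "killing process with pid $pid"
    kill "$pid"                               # SIGTERM by default
    wait "$pid" 2>/dev/null                   # only reaps if $pid is our child
  }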
00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1277626) - No such process 00:04:35.941 ERROR: process (pid: 1277626) is no longer running 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.941 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:35.942 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:35.942 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.942 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:35.942 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.942 14:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:35.942 14:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:35.942 14:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:35.942 14:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:35.942 00:04:35.942 real 0m1.043s 00:04:35.942 user 0m1.014s 00:04:35.942 sys 0m0.469s 00:04:35.942 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.942 14:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.942 ************************************ 00:04:35.942 END TEST default_locks 00:04:35.942 ************************************ 00:04:35.942 14:14:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:35.942 14:14:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.942 14:14:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.942 14:14:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.942 ************************************ 00:04:35.942 START TEST default_locks_via_rpc 00:04:35.942 ************************************ 00:04:35.942 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:35.942 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1277884 00:04:35.942 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1277884 00:04:35.942 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.942 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1277884 ']' 00:04:35.942 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.942 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.942 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
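The es bookkeeping just above is the NOT wrapper from common/autotest_common.sh asserting that waitforlisten now fails for the killed pid. A simplified stand-in with the same contract (the real helper also validates its argument and, as the trace shows, special-cases exit statuses above 128):

  NOT() {                      # succeed only if the wrapped command fails
    local es=0
    "$@" || es=$?
    (( es != 0 ))
  }
  NOT waitforlisten 1277626    # passes: the pid was killed above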
00:04:35.942 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.942 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.942 [2024-11-17 14:14:25.117126] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:35.942 [2024-11-17 14:14:25.117168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277884 ] 00:04:36.201 [2024-11-17 14:14:25.193558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.201 [2024-11-17 14:14:25.236063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1277884 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1277884 00:04:36.459 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.719 14:14:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1277884 00:04:36.719 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1277884 ']' 00:04:36.719 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1277884 00:04:36.719 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:36.719 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.719 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1277884 00:04:36.719 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.719 
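Unlike default_locks, this variant flips the per-core lock files at runtime rather than at startup. The same toggle can be driven by hand against the socket from this run (rpc.py path assumed from the workspace layout; 1277884 is this test's target pid, and rpc.py defaults to /var/tmp/spdk.sock):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc framework_disable_cpumask_locks                               # releases /var/tmp/spdk_cpu_lock_*
  lslocks -p 1277884 | grep -q spdk_cpu_lock || echo "locks dropped"
  $rpc framework_enable_cpumask_locks                                # re-acquires them
  lslocks -p 1277884 | grep -q spdk_cpu_lock && echo "locks held"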
14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.719 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1277884' 00:04:36.719 killing process with pid 1277884 00:04:36.719 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1277884 00:04:36.719 14:14:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1277884 00:04:37.286 00:04:37.286 real 0m1.154s 00:04:37.286 user 0m1.120s 00:04:37.286 sys 0m0.509s 00:04:37.286 14:14:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.286 14:14:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.286 ************************************ 00:04:37.286 END TEST default_locks_via_rpc 00:04:37.286 ************************************ 00:04:37.286 14:14:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:37.286 14:14:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.286 14:14:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.286 14:14:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.286 ************************************ 00:04:37.286 START TEST non_locking_app_on_locked_coremask 00:04:37.286 ************************************ 00:04:37.286 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:37.286 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1278138 00:04:37.286 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.286 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1278138 /var/tmp/spdk.sock 00:04:37.286 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1278138 ']' 00:04:37.286 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.286 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.287 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.287 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.287 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.287 [2024-11-17 14:14:26.341092] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:37.287 [2024-11-17 14:14:26.341134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278138 ] 00:04:37.287 [2024-11-17 14:14:26.417785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.287 [2024-11-17 14:14:26.460314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.546 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.546 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:37.546 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1278141 00:04:37.546 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1278141 /var/tmp/spdk2.sock 00:04:37.546 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:37.546 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1278141 ']' 00:04:37.546 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:37.546 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.546 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:37.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:37.546 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.546 14:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.546 [2024-11-17 14:14:26.724456] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:37.546 [2024-11-17 14:14:26.724501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278141 ] 00:04:37.805 [2024-11-17 14:14:26.819563] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:37.805 [2024-11-17 14:14:26.819585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.805 [2024-11-17 14:14:26.896605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.378 14:14:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.378 14:14:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.378 14:14:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1278138 00:04:38.378 14:14:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:38.378 14:14:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1278138 00:04:39.315 lslocks: write error 00:04:39.315 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1278138 00:04:39.315 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1278138 ']' 00:04:39.315 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1278138 00:04:39.315 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.315 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.315 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1278138 00:04:39.315 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.315 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.315 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1278138' 00:04:39.315 killing process with pid 1278138 00:04:39.315 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1278138 00:04:39.315 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1278138 00:04:39.882 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1278141 00:04:39.882 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1278141 ']' 00:04:39.883 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1278141 00:04:39.883 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.883 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.883 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1278141 00:04:39.883 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.883 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.883 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1278141' 00:04:39.883 
killing process with pid 1278141 00:04:39.883 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1278141 00:04:39.883 14:14:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1278141 00:04:40.145 00:04:40.145 real 0m2.936s 00:04:40.145 user 0m3.097s 00:04:40.145 sys 0m0.959s 00:04:40.145 14:14:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.145 14:14:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.145 ************************************ 00:04:40.145 END TEST non_locking_app_on_locked_coremask 00:04:40.145 ************************************ 00:04:40.145 14:14:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:40.145 14:14:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.145 14:14:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.145 14:14:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.145 ************************************ 00:04:40.145 START TEST locking_app_on_unlocked_coremask 00:04:40.145 ************************************ 00:04:40.145 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:40.145 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1278638 00:04:40.145 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1278638 /var/tmp/spdk.sock 00:04:40.145 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:40.145 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1278638 ']' 00:04:40.145 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.145 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.145 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.145 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.145 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.145 [2024-11-17 14:14:29.351051] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:40.145 [2024-11-17 14:14:29.351095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278638 ] 00:04:40.404 [2024-11-17 14:14:29.422745] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:40.404 [2024-11-17 14:14:29.422770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.404 [2024-11-17 14:14:29.460654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.662 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.662 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:40.662 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1278645 00:04:40.662 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1278645 /var/tmp/spdk2.sock 00:04:40.662 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:40.662 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1278645 ']' 00:04:40.662 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.662 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.662 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.662 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.662 14:14:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.662 [2024-11-17 14:14:29.735592] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:40.662 [2024-11-17 14:14:29.735639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278645 ] 00:04:40.662 [2024-11-17 14:14:29.827638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.921 [2024-11-17 14:14:29.909398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.487 14:14:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.487 14:14:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:41.487 14:14:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1278645 00:04:41.487 14:14:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1278645 00:04:41.487 14:14:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.054 lslocks: write error 00:04:42.054 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1278638 00:04:42.054 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1278638 ']' 00:04:42.054 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1278638 00:04:42.054 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:42.054 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.054 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1278638 00:04:42.054 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.054 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.054 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1278638' 00:04:42.054 killing process with pid 1278638 00:04:42.054 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1278638 00:04:42.054 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1278638 00:04:42.622 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1278645 00:04:42.622 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1278645 ']' 00:04:42.622 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1278645 00:04:42.622 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:42.622 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.622 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1278645 00:04:42.881 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.881 14:14:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.881 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1278645' 00:04:42.881 killing process with pid 1278645 00:04:42.881 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1278645 00:04:42.881 14:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1278645 00:04:43.140 00:04:43.140 real 0m2.861s 00:04:43.140 user 0m3.030s 00:04:43.140 sys 0m0.922s 00:04:43.140 14:14:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.140 14:14:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.141 ************************************ 00:04:43.141 END TEST locking_app_on_unlocked_coremask 00:04:43.141 ************************************ 00:04:43.141 14:14:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:43.141 14:14:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.141 14:14:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.141 14:14:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.141 ************************************ 00:04:43.141 START TEST locking_app_on_locked_coremask 00:04:43.141 ************************************ 00:04:43.141 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:43.141 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1279140 00:04:43.141 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1279140 /var/tmp/spdk.sock 00:04:43.141 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.141 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1279140 ']' 00:04:43.141 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.141 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.141 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.141 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.141 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.141 [2024-11-17 14:14:32.277197] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:43.141 [2024-11-17 14:14:32.277240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279140 ] 00:04:43.141 [2024-11-17 14:14:32.352819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.399 [2024-11-17 14:14:32.392189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1279146 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1279146 /var/tmp/spdk2.sock 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1279146 /var/tmp/spdk2.sock 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1279146 /var/tmp/spdk2.sock 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1279146 ']' 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.658 14:14:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.658 [2024-11-17 14:14:32.678256] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:43.658 [2024-11-17 14:14:32.678301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279146 ] 00:04:43.658 [2024-11-17 14:14:32.767763] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1279140 has claimed it. 00:04:43.658 [2024-11-17 14:14:32.767801] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:44.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1279146) - No such process 00:04:44.225 ERROR: process (pid: 1279146) is no longer running 00:04:44.225 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.225 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:44.225 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:44.225 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.225 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.225 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.225 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1279140 00:04:44.225 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1279140 00:04:44.225 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.793 lslocks: write error 00:04:44.793 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1279140 00:04:44.793 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1279140 ']' 00:04:44.793 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1279140 00:04:44.793 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:44.793 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.793 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1279140 00:04:44.793 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.793 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.793 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1279140' 00:04:44.793 killing process with pid 1279140 00:04:44.793 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1279140 00:04:44.793 14:14:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1279140 00:04:45.052 00:04:45.052 real 0m1.947s 00:04:45.052 user 0m2.065s 00:04:45.052 sys 0m0.656s 00:04:45.052 14:14:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
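The failure checked above is the whole point of this test: a second target asked for an already-claimed core must refuse to start. The collision is straightforward to reproduce outside the harness with the same binary and flags (paths and mask taken from this run):

  tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$tgt" -m 0x1 & first=$!                 # claims /var/tmp/spdk_cpu_lock_000
  sleep 1                                  # crude stand-in for waitforlisten
  "$tgt" -m 0x1 -r /var/tmp/spdk2.sock     # "Cannot create lock on core 0, probably process ... has claimed it"
  echo "second instance exited with $?"    # non-zero, which is what NOT waitforlisten asserts
  kill "$first"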
00:04:45.052 14:14:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.052 ************************************ 00:04:45.052 END TEST locking_app_on_locked_coremask 00:04:45.052 ************************************ 00:04:45.052 14:14:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:45.052 14:14:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.052 14:14:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.052 14:14:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.052 ************************************ 00:04:45.052 START TEST locking_overlapped_coremask 00:04:45.052 ************************************ 00:04:45.052 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:45.052 14:14:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1279429 00:04:45.052 14:14:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1279429 /var/tmp/spdk.sock 00:04:45.052 14:14:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:45.052 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1279429 ']' 00:04:45.052 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.052 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.052 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.052 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.052 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.310 [2024-11-17 14:14:34.292003] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:45.311 [2024-11-17 14:14:34.292046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279429 ] 00:04:45.311 [2024-11-17 14:14:34.367279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:45.311 [2024-11-17 14:14:34.412582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.311 [2024-11-17 14:14:34.412694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.311 [2024-11-17 14:14:34.412695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1279633 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1279633 /var/tmp/spdk2.sock 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1279633 /var/tmp/spdk2.sock 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1279633 /var/tmp/spdk2.sock 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1279633 ']' 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.569 14:14:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.569 [2024-11-17 14:14:34.680721] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:45.569 [2024-11-17 14:14:34.680767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279633 ] 00:04:45.569 [2024-11-17 14:14:34.773376] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1279429 has claimed it. 00:04:45.569 [2024-11-17 14:14:34.773412] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:46.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1279633) - No such process 00:04:46.135 ERROR: process (pid: 1279633) is no longer running 00:04:46.135 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1279429 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1279429 ']' 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1279429 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.136 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1279429 00:04:46.394 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.394 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.394 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1279429' 00:04:46.394 killing process with pid 1279429 00:04:46.394 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1279429 00:04:46.394 14:14:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1279429 00:04:46.653 00:04:46.653 real 0m1.433s 00:04:46.653 user 0m3.952s 00:04:46.653 sys 0m0.380s 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.653 ************************************ 00:04:46.653 END TEST locking_overlapped_coremask 00:04:46.653 ************************************ 00:04:46.653 14:14:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:46.653 14:14:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.653 14:14:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.653 14:14:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.653 ************************************ 00:04:46.653 START TEST locking_overlapped_coremask_via_rpc 00:04:46.653 ************************************ 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1279768 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1279768 /var/tmp/spdk.sock 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1279768 ']' 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.653 14:14:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.653 [2024-11-17 14:14:35.797019] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:46.653 [2024-11-17 14:14:35.797063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279768 ] 00:04:46.653 [2024-11-17 14:14:35.872766] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
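The locking_overlapped_coremask test that completes above exercises spdk_tgt's startup-time core claiming: the first target holds one lock file per core (visible later in this trace as /var/tmp/spdk_cpu_lock_000..002), so a second target whose mask overlaps must exit immediately. A condensed sketch of what the test drives, using the workspace paths from this run; the waitforlisten/startup synchronization is elided:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # First target claims cores 0-2 (mask 0x7) and creates
    # /var/tmp/spdk_cpu_lock_000..002 at startup.
    $SPDK/build/bin/spdk_tgt -m 0x7 &

    # Second target overlaps on core 2 (0x1c = cores 2-4) and must abort:
    #   app.c: Cannot create lock on core 2, probably process <pid> has claimed it.
    #   app.c: Unable to acquire lock on assigned core mask - exiting.
    $SPDK/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock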
00:04:46.653 [2024-11-17 14:14:35.872792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.912 [2024-11-17 14:14:35.917917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.912 [2024-11-17 14:14:35.918026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.912 [2024-11-17 14:14:35.918026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.912 14:14:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.912 14:14:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:46.912 14:14:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1279895 00:04:46.912 14:14:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1279895 /var/tmp/spdk2.sock 00:04:46.912 14:14:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:46.912 14:14:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1279895 ']' 00:04:46.912 14:14:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.912 14:14:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.912 14:14:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.912 14:14:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.912 14:14:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.170 [2024-11-17 14:14:36.181673] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:47.171 [2024-11-17 14:14:36.181720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279895 ] 00:04:47.171 [2024-11-17 14:14:36.274219] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:47.171 [2024-11-17 14:14:36.274242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:47.171 [2024-11-17 14:14:36.362020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.171 [2024-11-17 14:14:36.365397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.171 [2024-11-17 14:14:36.365398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.105 [2024-11-17 14:14:37.027428] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1279768 has claimed it. 
00:04:48.105 request: 00:04:48.105 { 00:04:48.105 "method": "framework_enable_cpumask_locks", 00:04:48.105 "req_id": 1 00:04:48.105 } 00:04:48.105 Got JSON-RPC error response 00:04:48.105 response: 00:04:48.105 { 00:04:48.105 "code": -32603, 00:04:48.105 "message": "Failed to claim CPU core: 2" 00:04:48.105 } 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1279768 /var/tmp/spdk.sock 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1279768 ']' 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1279895 /var/tmp/spdk2.sock 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1279895 ']' 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
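The -32603 response above is the RPC-time variant of the same core-lock conflict: both targets were started with --disable-cpumask-locks, the first then took the per-core locks via framework_enable_cpumask_locks, and the second's claim is rejected because core 2 sits in both masks (0x7 and 0x1c). A minimal sketch of that flow plus the lock-file verification the test performs next (check_remaining_locks), using the rpc.py invocations and socket paths shown in this trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Locks are claimed lazily; the first claim over JSON-RPC wins.
    $SPDK/scripts/rpc.py framework_enable_cpumask_locks

    # The overlapping claim from the second target fails with
    # code -32603, "Failed to claim CPU core: 2".
    $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

    # check_remaining_locks: only the first target's lock files
    # (cores 0-2, i.e. mask 0x7) should exist afterwards.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]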
00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.105 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.364 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.364 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.364 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:48.364 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:48.364 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:48.364 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:48.364 00:04:48.364 real 0m1.726s 00:04:48.364 user 0m0.849s 00:04:48.364 sys 0m0.127s 00:04:48.364 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.364 14:14:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.364 ************************************ 00:04:48.364 END TEST locking_overlapped_coremask_via_rpc 00:04:48.364 ************************************ 00:04:48.364 14:14:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:48.364 14:14:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1279768 ]] 00:04:48.364 14:14:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1279768 00:04:48.364 14:14:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1279768 ']' 00:04:48.364 14:14:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1279768 00:04:48.364 14:14:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:48.364 14:14:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.364 14:14:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1279768 00:04:48.364 14:14:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.364 14:14:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.364 14:14:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1279768' 00:04:48.364 killing process with pid 1279768 00:04:48.364 14:14:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1279768 00:04:48.364 14:14:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1279768 00:04:48.930 14:14:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1279895 ]] 00:04:48.930 14:14:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1279895 00:04:48.930 14:14:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1279895 ']' 00:04:48.930 14:14:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1279895 00:04:48.930 14:14:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:48.930 14:14:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:04:48.930 14:14:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1279895 00:04:48.930 14:14:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:48.930 14:14:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:48.930 14:14:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1279895' 00:04:48.930 killing process with pid 1279895 00:04:48.930 14:14:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1279895 00:04:48.930 14:14:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1279895 00:04:49.190 14:14:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.190 14:14:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:49.190 14:14:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1279768 ]] 00:04:49.190 14:14:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1279768 00:04:49.190 14:14:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1279768 ']' 00:04:49.190 14:14:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1279768 00:04:49.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1279768) - No such process 00:04:49.190 14:14:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1279768 is not found' 00:04:49.190 Process with pid 1279768 is not found 00:04:49.190 14:14:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1279895 ]] 00:04:49.190 14:14:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1279895 00:04:49.190 14:14:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1279895 ']' 00:04:49.190 14:14:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1279895 00:04:49.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1279895) - No such process 00:04:49.190 14:14:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1279895 is not found' 00:04:49.190 Process with pid 1279895 is not found 00:04:49.190 14:14:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.190 00:04:49.190 real 0m14.494s 00:04:49.190 user 0m24.941s 00:04:49.190 sys 0m5.001s 00:04:49.190 14:14:38 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.190 14:14:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.190 ************************************ 00:04:49.190 END TEST cpu_locks 00:04:49.190 ************************************ 00:04:49.190 00:04:49.190 real 0m39.561s 00:04:49.190 user 1m15.502s 00:04:49.190 sys 0m8.522s 00:04:49.190 14:14:38 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.190 14:14:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.190 ************************************ 00:04:49.190 END TEST event 00:04:49.190 ************************************ 00:04:49.190 14:14:38 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:49.190 14:14:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.190 14:14:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.190 14:14:38 -- common/autotest_common.sh@10 -- # set +x 00:04:49.190 ************************************ 00:04:49.190 START TEST thread 00:04:49.190 ************************************ 00:04:49.190 14:14:38 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:49.450 * Looking for test storage... 00:04:49.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:49.450 14:14:38 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.450 14:14:38 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.450 14:14:38 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.450 14:14:38 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.450 14:14:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.450 14:14:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.450 14:14:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.450 14:14:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.450 14:14:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.450 14:14:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.450 14:14:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.450 14:14:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.450 14:14:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.450 14:14:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.450 14:14:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.450 14:14:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:49.450 14:14:38 thread -- scripts/common.sh@345 -- # : 1 00:04:49.450 14:14:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.450 14:14:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.450 14:14:38 thread -- scripts/common.sh@365 -- # decimal 1 00:04:49.450 14:14:38 thread -- scripts/common.sh@353 -- # local d=1 00:04:49.450 14:14:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.450 14:14:38 thread -- scripts/common.sh@355 -- # echo 1 00:04:49.450 14:14:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.450 14:14:38 thread -- scripts/common.sh@366 -- # decimal 2 00:04:49.450 14:14:38 thread -- scripts/common.sh@353 -- # local d=2 00:04:49.450 14:14:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.450 14:14:38 thread -- scripts/common.sh@355 -- # echo 2 00:04:49.451 14:14:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.451 14:14:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.451 14:14:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.451 14:14:38 thread -- scripts/common.sh@368 -- # return 0 00:04:49.451 14:14:38 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.451 14:14:38 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.451 --rc genhtml_branch_coverage=1 00:04:49.451 --rc genhtml_function_coverage=1 00:04:49.451 --rc genhtml_legend=1 00:04:49.451 --rc geninfo_all_blocks=1 00:04:49.451 --rc geninfo_unexecuted_blocks=1 00:04:49.451 00:04:49.451 ' 00:04:49.451 14:14:38 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.451 --rc genhtml_branch_coverage=1 00:04:49.451 --rc genhtml_function_coverage=1 00:04:49.451 --rc genhtml_legend=1 00:04:49.451 --rc geninfo_all_blocks=1 00:04:49.451 --rc geninfo_unexecuted_blocks=1 00:04:49.451 
00:04:49.451 ' 00:04:49.451 14:14:38 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.451 --rc genhtml_branch_coverage=1 00:04:49.451 --rc genhtml_function_coverage=1 00:04:49.451 --rc genhtml_legend=1 00:04:49.451 --rc geninfo_all_blocks=1 00:04:49.451 --rc geninfo_unexecuted_blocks=1 00:04:49.451 00:04:49.451 ' 00:04:49.451 14:14:38 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.451 --rc genhtml_branch_coverage=1 00:04:49.451 --rc genhtml_function_coverage=1 00:04:49.451 --rc genhtml_legend=1 00:04:49.451 --rc geninfo_all_blocks=1 00:04:49.451 --rc geninfo_unexecuted_blocks=1 00:04:49.451 00:04:49.451 ' 00:04:49.451 14:14:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.451 14:14:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:49.451 14:14:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.451 14:14:38 thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.451 ************************************ 00:04:49.451 START TEST thread_poller_perf 00:04:49.451 ************************************ 00:04:49.451 14:14:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.451 [2024-11-17 14:14:38.574084] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:49.451 [2024-11-17 14:14:38.574152] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280356 ] 00:04:49.451 [2024-11-17 14:14:38.652715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.710 [2024-11-17 14:14:38.694527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.710 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:50.647 [2024-11-17T13:14:39.872Z] ====================================== 00:04:50.647 [2024-11-17T13:14:39.872Z] busy:2307172774 (cyc) 00:04:50.647 [2024-11-17T13:14:39.872Z] total_run_count: 411000 00:04:50.647 [2024-11-17T13:14:39.872Z] tsc_hz: 2300000000 (cyc) 00:04:50.647 [2024-11-17T13:14:39.872Z] ====================================== 00:04:50.647 [2024-11-17T13:14:39.872Z] poller_cost: 5613 (cyc), 2440 (nsec) 00:04:50.647 00:04:50.647 real 0m1.185s 00:04:50.647 user 0m1.113s 00:04:50.647 sys 0m0.068s 00:04:50.647 14:14:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.647 14:14:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:50.647 ************************************ 00:04:50.647 END TEST thread_poller_perf 00:04:50.647 ************************************ 00:04:50.647 14:14:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.647 14:14:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:50.647 14:14:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.647 14:14:39 thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.647 ************************************ 00:04:50.647 START TEST thread_poller_perf 00:04:50.647 ************************************ 00:04:50.647 14:14:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.647 [2024-11-17 14:14:39.830091] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:50.647 [2024-11-17 14:14:39.830163] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280523 ] 00:04:50.906 [2024-11-17 14:14:39.908235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.906 [2024-11-17 14:14:39.950973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.906 Running 1000 pollers for 1 seconds with 0 microseconds period. 
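The poller_cost line in the first report is plain arithmetic over the two counters above it: busy cycles divided by total_run_count, converted to nanoseconds via the reported tsc_hz (assumed here to be the cycle-counter rate the tool divides by). A quick check against the printed numbers; the same derivation applies to the 0-microsecond run that follows:

    # cycles per poll: 2307172774 busy cycles / 411000 runs ≈ 5613 (cyc)
    # time per poll:   5613 cyc / 2.3 cyc-per-nsec (tsc_hz 2300000000) ≈ 2440 (nsec)
    echo "scale=0; 2307172774 / 411000" | bc    # -> 5613
    echo "scale=0; 5613 / 2.3" | bc             # -> 2440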
00:04:51.843 [2024-11-17T13:14:41.068Z] ====================================== 00:04:51.843 [2024-11-17T13:14:41.068Z] busy:2301749782 (cyc) 00:04:51.843 [2024-11-17T13:14:41.068Z] total_run_count: 5208000 00:04:51.843 [2024-11-17T13:14:41.068Z] tsc_hz: 2300000000 (cyc) 00:04:51.843 [2024-11-17T13:14:41.068Z] ====================================== 00:04:51.843 [2024-11-17T13:14:41.068Z] poller_cost: 441 (cyc), 191 (nsec) 00:04:51.843 00:04:51.843 real 0m1.185s 00:04:51.843 user 0m1.100s 00:04:51.843 sys 0m0.080s 00:04:51.843 14:14:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.843 14:14:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.843 ************************************ 00:04:51.843 END TEST thread_poller_perf 00:04:51.843 ************************************ 00:04:51.843 14:14:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:51.843 00:04:51.843 real 0m2.689s 00:04:51.843 user 0m2.375s 00:04:51.843 sys 0m0.328s 00:04:51.843 14:14:41 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.843 14:14:41 thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.843 ************************************ 00:04:51.843 END TEST thread 00:04:51.843 ************************************ 00:04:51.843 14:14:41 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:52.103 14:14:41 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:52.103 14:14:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.103 14:14:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.103 14:14:41 -- common/autotest_common.sh@10 -- # set +x 00:04:52.103 ************************************ 00:04:52.103 START TEST app_cmdline 00:04:52.103 ************************************ 00:04:52.103 14:14:41 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:52.103 * Looking for test storage... 
00:04:52.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:52.103 14:14:41 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.103 14:14:41 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.103 14:14:41 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.103 14:14:41 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.103 14:14:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:52.103 14:14:41 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.103 14:14:41 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.103 --rc genhtml_branch_coverage=1 00:04:52.103 --rc genhtml_function_coverage=1 00:04:52.103 --rc genhtml_legend=1 00:04:52.103 --rc geninfo_all_blocks=1 00:04:52.103 --rc geninfo_unexecuted_blocks=1 00:04:52.103 00:04:52.103 ' 00:04:52.103 14:14:41 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.103 --rc genhtml_branch_coverage=1 00:04:52.103 --rc genhtml_function_coverage=1 00:04:52.103 --rc genhtml_legend=1 00:04:52.103 --rc geninfo_all_blocks=1 00:04:52.103 --rc geninfo_unexecuted_blocks=1 
00:04:52.103 00:04:52.103 ' 00:04:52.103 14:14:41 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.103 --rc genhtml_branch_coverage=1 00:04:52.103 --rc genhtml_function_coverage=1 00:04:52.103 --rc genhtml_legend=1 00:04:52.103 --rc geninfo_all_blocks=1 00:04:52.103 --rc geninfo_unexecuted_blocks=1 00:04:52.103 00:04:52.103 ' 00:04:52.103 14:14:41 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.103 --rc genhtml_branch_coverage=1 00:04:52.103 --rc genhtml_function_coverage=1 00:04:52.103 --rc genhtml_legend=1 00:04:52.104 --rc geninfo_all_blocks=1 00:04:52.104 --rc geninfo_unexecuted_blocks=1 00:04:52.104 00:04:52.104 ' 00:04:52.104 14:14:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:52.104 14:14:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1280872 00:04:52.104 14:14:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1280872 00:04:52.104 14:14:41 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:52.104 14:14:41 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1280872 ']' 00:04:52.104 14:14:41 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.104 14:14:41 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.104 14:14:41 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.104 14:14:41 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.104 14:14:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:52.363 [2024-11-17 14:14:41.335873] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:52.363 [2024-11-17 14:14:41.335922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280872 ] 00:04:52.363 [2024-11-17 14:14:41.412203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.363 [2024-11-17 14:14:41.454786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.622 14:14:41 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.622 14:14:41 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:52.622 14:14:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:52.881 { 00:04:52.881 "version": "SPDK v25.01-pre git sha1 ca87521f7", 00:04:52.881 "fields": { 00:04:52.881 "major": 25, 00:04:52.881 "minor": 1, 00:04:52.881 "patch": 0, 00:04:52.881 "suffix": "-pre", 00:04:52.881 "commit": "ca87521f7" 00:04:52.881 } 00:04:52.881 } 00:04:52.881 14:14:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:52.881 14:14:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:52.881 14:14:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:52.881 14:14:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:52.881 14:14:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:52.881 14:14:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:52.881 14:14:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.881 14:14:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:52.881 14:14:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:52.881 14:14:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:52.881 14:14:41 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:52.881 request: 00:04:52.881 { 00:04:52.881 "method": "env_dpdk_get_mem_stats", 00:04:52.881 "req_id": 1 00:04:52.881 } 00:04:52.881 Got JSON-RPC error response 00:04:52.881 response: 00:04:52.881 { 00:04:52.881 "code": -32601, 00:04:52.881 "message": "Method not found" 00:04:52.881 } 00:04:52.881 14:14:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:52.881 14:14:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.881 14:14:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.881 14:14:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.881 14:14:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1280872 00:04:52.881 14:14:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1280872 ']' 00:04:52.881 14:14:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1280872 00:04:52.881 14:14:42 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:53.140 14:14:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.140 14:14:42 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1280872 00:04:53.140 14:14:42 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.140 14:14:42 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.140 14:14:42 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1280872' 00:04:53.140 killing process with pid 1280872 00:04:53.140 14:14:42 app_cmdline -- common/autotest_common.sh@973 -- # kill 1280872 00:04:53.140 14:14:42 app_cmdline -- common/autotest_common.sh@978 -- # wait 1280872 00:04:53.399 00:04:53.399 real 0m1.343s 00:04:53.399 user 0m1.554s 00:04:53.399 sys 0m0.457s 00:04:53.399 14:14:42 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.399 14:14:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:53.399 ************************************ 00:04:53.399 END TEST app_cmdline 00:04:53.399 ************************************ 00:04:53.399 14:14:42 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.399 14:14:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.399 14:14:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.399 14:14:42 -- common/autotest_common.sh@10 -- # set +x 00:04:53.399 ************************************ 00:04:53.399 START TEST version 00:04:53.399 ************************************ 00:04:53.399 14:14:42 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.399 * Looking for test storage... 
00:04:53.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:53.399 14:14:42 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.399 14:14:42 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.399 14:14:42 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.658 14:14:42 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.658 14:14:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.658 14:14:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.658 14:14:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.658 14:14:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.658 14:14:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.658 14:14:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.658 14:14:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.658 14:14:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.658 14:14:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.658 14:14:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.658 14:14:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.658 14:14:42 version -- scripts/common.sh@344 -- # case "$op" in 00:04:53.658 14:14:42 version -- scripts/common.sh@345 -- # : 1 00:04:53.658 14:14:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.658 14:14:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.658 14:14:42 version -- scripts/common.sh@365 -- # decimal 1 00:04:53.658 14:14:42 version -- scripts/common.sh@353 -- # local d=1 00:04:53.658 14:14:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.658 14:14:42 version -- scripts/common.sh@355 -- # echo 1 00:04:53.658 14:14:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.658 14:14:42 version -- scripts/common.sh@366 -- # decimal 2 00:04:53.658 14:14:42 version -- scripts/common.sh@353 -- # local d=2 00:04:53.658 14:14:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.658 14:14:42 version -- scripts/common.sh@355 -- # echo 2 00:04:53.658 14:14:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.658 14:14:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.658 14:14:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.658 14:14:42 version -- scripts/common.sh@368 -- # return 0 00:04:53.658 14:14:42 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.658 14:14:42 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.658 --rc genhtml_branch_coverage=1 00:04:53.658 --rc genhtml_function_coverage=1 00:04:53.658 --rc genhtml_legend=1 00:04:53.658 --rc geninfo_all_blocks=1 00:04:53.658 --rc geninfo_unexecuted_blocks=1 00:04:53.658 00:04:53.658 ' 00:04:53.658 14:14:42 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.658 --rc genhtml_branch_coverage=1 00:04:53.658 --rc genhtml_function_coverage=1 00:04:53.658 --rc genhtml_legend=1 00:04:53.658 --rc geninfo_all_blocks=1 00:04:53.658 --rc geninfo_unexecuted_blocks=1 00:04:53.658 00:04:53.658 ' 00:04:53.658 14:14:42 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.658 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.658 --rc genhtml_branch_coverage=1 00:04:53.658 --rc genhtml_function_coverage=1 00:04:53.658 --rc genhtml_legend=1 00:04:53.658 --rc geninfo_all_blocks=1 00:04:53.658 --rc geninfo_unexecuted_blocks=1 00:04:53.658 00:04:53.658 ' 00:04:53.658 14:14:42 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.658 --rc genhtml_branch_coverage=1 00:04:53.659 --rc genhtml_function_coverage=1 00:04:53.659 --rc genhtml_legend=1 00:04:53.659 --rc geninfo_all_blocks=1 00:04:53.659 --rc geninfo_unexecuted_blocks=1 00:04:53.659 00:04:53.659 ' 00:04:53.659 14:14:42 version -- app/version.sh@17 -- # get_header_version major 00:04:53.659 14:14:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.659 14:14:42 version -- app/version.sh@14 -- # cut -f2 00:04:53.659 14:14:42 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.659 14:14:42 version -- app/version.sh@17 -- # major=25 00:04:53.659 14:14:42 version -- app/version.sh@18 -- # get_header_version minor 00:04:53.659 14:14:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.659 14:14:42 version -- app/version.sh@14 -- # cut -f2 00:04:53.659 14:14:42 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.659 14:14:42 version -- app/version.sh@18 -- # minor=1 00:04:53.659 14:14:42 version -- app/version.sh@19 -- # get_header_version patch 00:04:53.659 14:14:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.659 14:14:42 version -- app/version.sh@14 -- # cut -f2 00:04:53.659 14:14:42 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.659 14:14:42 version -- app/version.sh@19 -- # patch=0 00:04:53.659 14:14:42 version -- app/version.sh@20 -- # get_header_version suffix 00:04:53.659 14:14:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.659 14:14:42 version -- app/version.sh@14 -- # cut -f2 00:04:53.659 14:14:42 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.659 14:14:42 version -- app/version.sh@20 -- # suffix=-pre 00:04:53.659 14:14:42 version -- app/version.sh@22 -- # version=25.1 00:04:53.659 14:14:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:53.659 14:14:42 version -- app/version.sh@28 -- # version=25.1rc0 00:04:53.659 14:14:42 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:53.659 14:14:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:53.659 14:14:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:53.659 14:14:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:53.659 00:04:53.659 real 0m0.246s 00:04:53.659 user 0m0.147s 00:04:53.659 sys 0m0.142s 00:04:53.659 14:14:42 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.659 
14:14:42 version -- common/autotest_common.sh@10 -- # set +x 00:04:53.659 ************************************ 00:04:53.659 END TEST version 00:04:53.659 ************************************ 00:04:53.659 14:14:42 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:53.659 14:14:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:53.659 14:14:42 -- spdk/autotest.sh@194 -- # uname -s 00:04:53.659 14:14:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:53.659 14:14:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:53.659 14:14:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:53.659 14:14:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:53.659 14:14:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:53.659 14:14:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:53.659 14:14:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.659 14:14:42 -- common/autotest_common.sh@10 -- # set +x 00:04:53.659 14:14:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:53.659 14:14:42 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:53.659 14:14:42 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:53.659 14:14:42 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:53.659 14:14:42 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:53.659 14:14:42 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:53.659 14:14:42 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:53.659 14:14:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:53.659 14:14:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.659 14:14:42 -- common/autotest_common.sh@10 -- # set +x 00:04:53.659 ************************************ 00:04:53.659 START TEST nvmf_tcp 00:04:53.659 ************************************ 00:04:53.659 14:14:42 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:53.918 * Looking for test storage... 
00:04:53.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:53.918 14:14:42 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.918 14:14:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.918 14:14:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.918 14:14:43 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.918 14:14:43 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:53.918 14:14:43 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.918 14:14:43 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.918 --rc genhtml_branch_coverage=1 00:04:53.918 --rc genhtml_function_coverage=1 00:04:53.918 --rc genhtml_legend=1 00:04:53.918 --rc geninfo_all_blocks=1 00:04:53.918 --rc geninfo_unexecuted_blocks=1 00:04:53.918 00:04:53.918 ' 00:04:53.918 14:14:43 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.918 --rc genhtml_branch_coverage=1 00:04:53.918 --rc genhtml_function_coverage=1 00:04:53.918 --rc genhtml_legend=1 00:04:53.918 --rc geninfo_all_blocks=1 00:04:53.918 --rc geninfo_unexecuted_blocks=1 00:04:53.918 00:04:53.918 ' 00:04:53.918 14:14:43 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:53.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.918 --rc genhtml_branch_coverage=1 00:04:53.918 --rc genhtml_function_coverage=1 00:04:53.918 --rc genhtml_legend=1 00:04:53.918 --rc geninfo_all_blocks=1 00:04:53.918 --rc geninfo_unexecuted_blocks=1 00:04:53.918 00:04:53.918 ' 00:04:53.918 14:14:43 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.918 --rc genhtml_branch_coverage=1 00:04:53.918 --rc genhtml_function_coverage=1 00:04:53.918 --rc genhtml_legend=1 00:04:53.918 --rc geninfo_all_blocks=1 00:04:53.918 --rc geninfo_unexecuted_blocks=1 00:04:53.918 00:04:53.918 ' 00:04:53.918 14:14:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:53.918 14:14:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:53.918 14:14:43 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:53.918 14:14:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:53.918 14:14:43 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.918 14:14:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.918 ************************************ 00:04:53.918 START TEST nvmf_target_core 00:04:53.918 ************************************ 00:04:53.918 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:54.177 * Looking for test storage... 00:04:54.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.177 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.178 --rc genhtml_branch_coverage=1 00:04:54.178 --rc genhtml_function_coverage=1 00:04:54.178 --rc genhtml_legend=1 00:04:54.178 --rc geninfo_all_blocks=1 00:04:54.178 --rc geninfo_unexecuted_blocks=1 00:04:54.178 00:04:54.178 ' 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.178 --rc genhtml_branch_coverage=1 00:04:54.178 --rc genhtml_function_coverage=1 00:04:54.178 --rc genhtml_legend=1 00:04:54.178 --rc geninfo_all_blocks=1 00:04:54.178 --rc geninfo_unexecuted_blocks=1 00:04:54.178 00:04:54.178 ' 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.178 --rc genhtml_branch_coverage=1 00:04:54.178 --rc genhtml_function_coverage=1 00:04:54.178 --rc genhtml_legend=1 00:04:54.178 --rc geninfo_all_blocks=1 00:04:54.178 --rc geninfo_unexecuted_blocks=1 00:04:54.178 00:04:54.178 ' 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.178 --rc genhtml_branch_coverage=1 00:04:54.178 --rc genhtml_function_coverage=1 00:04:54.178 --rc genhtml_legend=1 00:04:54.178 --rc geninfo_all_blocks=1 00:04:54.178 --rc geninfo_unexecuted_blocks=1 00:04:54.178 00:04:54.178 ' 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:54.178 
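[editor's note] The cmp_versions/lt trace above (repeated at the head of each nested test below) is how the harness decides whether the installed lcov still uses the 1.x option syntax: both version strings are split on '.', '-' and ':' and compared field by field, and the --rc branch/function-coverage flags are only appended to LCOV_OPTS/LCOV when lcov predates 2. A minimal standalone sketch of that comparison, assuming purely numeric fields; function names here are illustrative, not SPDK's own:

  version_lt() {                          # succeeds when $1 sorts before $2
    local -a v1 v2
    local i max
    IFS='.-:' read -ra v1 <<< "$1"        # split fields the way the trace does
    IFS='.-:' read -ra v2 <<< "$2"
    max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do     # missing fields compare as 0
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                              # equal, so not less-than
  }
  version_lt 1.15 2 && echo "old lcov: use 1.x --rc coverage flags"
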
************************************ 00:04:54.178 START TEST nvmf_abort 00:04:54.178 ************************************ 00:04:54.178 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:54.439 * Looking for test storage... 00:04:54.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.439 --rc genhtml_branch_coverage=1 00:04:54.439 --rc genhtml_function_coverage=1 00:04:54.439 --rc genhtml_legend=1 00:04:54.439 --rc geninfo_all_blocks=1 00:04:54.439 --rc geninfo_unexecuted_blocks=1 00:04:54.439 00:04:54.439 ' 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.439 --rc genhtml_branch_coverage=1 00:04:54.439 --rc genhtml_function_coverage=1 00:04:54.439 --rc genhtml_legend=1 00:04:54.439 --rc geninfo_all_blocks=1 00:04:54.439 --rc geninfo_unexecuted_blocks=1 00:04:54.439 00:04:54.439 ' 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.439 --rc genhtml_branch_coverage=1 00:04:54.439 --rc genhtml_function_coverage=1 00:04:54.439 --rc genhtml_legend=1 00:04:54.439 --rc geninfo_all_blocks=1 00:04:54.439 --rc geninfo_unexecuted_blocks=1 00:04:54.439 00:04:54.439 ' 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.439 --rc genhtml_branch_coverage=1 00:04:54.439 --rc genhtml_function_coverage=1 00:04:54.439 --rc genhtml_legend=1 00:04:54.439 --rc geninfo_all_blocks=1 00:04:54.439 --rc geninfo_unexecuted_blocks=1 00:04:54.439 00:04:54.439 ' 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.439 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
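[editor's note] The nvmftestinit trace that follows (through the ping checks below) turns the two enumerated e810 ports into a point-to-point NVMe/TCP test bed by hiding the target port in a network namespace. Condensed from the exact commands in this run; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are what this rig enumerated and will differ elsewhere:

  ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                  # reachability check, as in the log
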
00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:54.440 14:14:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:01.121 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:01.122 14:14:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:01.122 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:01.122 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:01.122 14:14:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:01.122 Found net devices under 0000:86:00.0: cvl_0_0 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:01.122 Found net devices under 0000:86:00.1: cvl_0_1 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:01.122 14:14:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:01.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:01.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:05:01.122 00:05:01.122 --- 10.0.0.2 ping statistics --- 00:05:01.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:01.122 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:01.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:01.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:05:01.122 00:05:01.122 --- 10.0.0.1 ping statistics --- 00:05:01.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:01.122 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1284491 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:01.122 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1284491 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1284491 ']' 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.123 [2024-11-17 14:14:49.657090] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:01.123 [2024-11-17 14:14:49.657134] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:01.123 [2024-11-17 14:14:49.735363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.123 [2024-11-17 14:14:49.779144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:01.123 [2024-11-17 14:14:49.779182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:01.123 [2024-11-17 14:14:49.779190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.123 [2024-11-17 14:14:49.779196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.123 [2024-11-17 14:14:49.779202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:01.123 [2024-11-17 14:14:49.780661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.123 [2024-11-17 14:14:49.780769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.123 [2024-11-17 14:14:49.780770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.123 [2024-11-17 14:14:49.929507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.123 Malloc0 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.123 Delay0 
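[editor's note] The rpc_cmd calls around this point stand up the abort test target: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks, a delay bdev layered on top (keeping I/O in flight long enough for the aborts to land), then a subsystem exposing that namespace on 10.0.0.2:4420. A sketch of the same sequence as direct scripts/rpc.py invocations against the running nvmf_tgt, with parameters copied from this run's trace and the default /var/tmp/spdk.sock RPC socket assumed:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB, 4096-byte blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000               # avg/p99 latencies in usec (~1 s)
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
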
00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.123 14:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.123 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.123 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:01.123 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.123 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.123 [2024-11-17 14:14:50.005821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:01.123 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.123 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:01.123 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.123 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.123 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.123 14:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:01.123 [2024-11-17 14:14:50.102059] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:03.082 Initializing NVMe Controllers 00:05:03.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:03.082 controller IO queue size 128 less than required 00:05:03.082 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:03.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:03.082 Initialization complete. Launching workers. 
00:05:03.082 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36402 00:05:03.082 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36463, failed to submit 62 00:05:03.082 success 36406, unsuccessful 57, failed 0 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:03.082 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:03.082 rmmod nvme_tcp 00:05:03.082 rmmod nvme_fabrics 00:05:03.341 rmmod nvme_keyring 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1284491 ']' 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1284491 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1284491 ']' 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1284491 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1284491 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1284491' 00:05:03.341 killing process with pid 1284491 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1284491 00:05:03.341 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1284491 00:05:03.602 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:03.602 14:14:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:03.602 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:03.602 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:03.602 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:03.602 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:03.602 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:03.602 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:03.602 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:03.602 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:03.602 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:03.602 14:14:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:05.518 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:05.518 00:05:05.518 real 0m11.315s 00:05:05.518 user 0m11.888s 00:05:05.518 sys 0m5.438s 00:05:05.518 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.518 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:05.518 ************************************ 00:05:05.518 END TEST nvmf_abort 00:05:05.518 ************************************ 00:05:05.518 14:14:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:05.518 14:14:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:05.518 14:14:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.518 14:14:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:05.518 ************************************ 00:05:05.518 START TEST nvmf_ns_hotplug_stress 00:05:05.518 ************************************ 00:05:05.518 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:05.778 * Looking for test storage... 
00:05:05.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.778 --rc genhtml_branch_coverage=1 00:05:05.778 --rc genhtml_function_coverage=1 00:05:05.778 --rc genhtml_legend=1 00:05:05.778 --rc geninfo_all_blocks=1 00:05:05.778 --rc geninfo_unexecuted_blocks=1 00:05:05.778 00:05:05.778 ' 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.778 --rc genhtml_branch_coverage=1 00:05:05.778 --rc genhtml_function_coverage=1 00:05:05.778 --rc genhtml_legend=1 00:05:05.778 --rc geninfo_all_blocks=1 00:05:05.778 --rc geninfo_unexecuted_blocks=1 00:05:05.778 00:05:05.778 ' 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.778 --rc genhtml_branch_coverage=1 00:05:05.778 --rc genhtml_function_coverage=1 00:05:05.778 --rc genhtml_legend=1 00:05:05.778 --rc geninfo_all_blocks=1 00:05:05.778 --rc geninfo_unexecuted_blocks=1 00:05:05.778 00:05:05.778 ' 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.778 --rc genhtml_branch_coverage=1 00:05:05.778 --rc genhtml_function_coverage=1 00:05:05.778 --rc genhtml_legend=1 00:05:05.778 --rc geninfo_all_blocks=1 00:05:05.778 --rc geninfo_unexecuted_blocks=1 00:05:05.778 00:05:05.778 ' 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.778 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:05.779 14:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:12.351 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.351 
14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:12.351 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:12.351 Found net devices under 0000:86:00.0: cvl_0_0 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.351 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:12.352 Found net devices under 0000:86:00.1: cvl_0_1 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:05:12.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:12.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms
00:05:12.352 
00:05:12.352 --- 10.0.0.2 ping statistics ---
00:05:12.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:12.352 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:12.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:12.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms
00:05:12.352 
00:05:12.352 --- 10.0.0.1 ping statistics ---
00:05:12.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:12.352 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1288646
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1288646
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1288646 ']'
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:12.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:12.352 14:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:12.352 [2024-11-17 14:15:01.002936] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:05:12.352 [2024-11-17 14:15:01.002988] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:12.352 [2024-11-17 14:15:01.083046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:12.352 [2024-11-17 14:15:01.127721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:12.352 [2024-11-17 14:15:01.127757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:12.352 [2024-11-17 14:15:01.127765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:12.352 [2024-11-17 14:15:01.127771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:12.352 [2024-11-17 14:15:01.127776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
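The trace above is the harness building a self-contained NVMe/TCP test bed on one machine: nvmf_tcp_init moves one port of the e810 pair (cvl_0_0) into a private network namespace and addresses it as 10.0.0.2, leaves its link partner (cvl_0_1) in the root namespace as 10.0.0.1, opens TCP port 4420 in iptables, checks reachability with one ping in each direction, and only then launches nvmf_tgt inside the namespace and waits for its RPC socket. A minimal standalone sketch of the same setup follows; the interface names, addresses, core mask, and socket path are the ones from this run and will differ on other rigs, and the polling loop is a simplified stand-in for the waitforlisten helper, not SPDK's implementation:

    #!/usr/bin/env bash
    set -euo pipefail
    NS=cvl_0_0_ns_spdk     # namespace name used by this run
    TGT_IF=cvl_0_0         # port that will host the target
    INI_IF=cvl_0_1         # link partner, stays in the root namespace
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # admit NVMe/TCP traffic arriving on the initiator-side interface
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target ns -> root ns
    # start the target in the namespace, then poll for its RPC socket,
    # roughly what nvmfappstart/waitforlisten do in the trace above
    ip netns exec "$NS" ./build/bin/nvmf_tgt -m 0xE &
    for _ in $(seq 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done

Running the target in a namespace is what lets a single host exercise a real NIC end to end: the initiator in the root namespace and the SPDK target in the namespace each own one physical port, so the test traffic actually crosses the wire instead of staying on loopback.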
00:05:12.352 [2024-11-17 14:15:01.130371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.352 [2024-11-17 14:15:01.130415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.352 [2024-11-17 14:15:01.130416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.352 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.352 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:12.352 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:12.352 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.352 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:12.352 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:12.352 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:12.352 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:12.352 [2024-11-17 14:15:01.447863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.352 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:12.611 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:12.869 [2024-11-17 14:15:01.865318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:12.869 14:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:13.127 14:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:13.127 Malloc0 00:05:13.127 14:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:13.385 Delay0 00:05:13.385 14:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.643 14:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:13.901 NULL1 00:05:13.901 14:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:14.158 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1289123 00:05:14.158 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:14.158 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:14.158 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.158 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.417 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:14.417 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:14.675 true 00:05:14.675 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:14.675 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.933 14:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.933 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:14.933 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:15.192 true 00:05:15.192 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:15.192 14:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.126 Read completed with error (sct=0, sc=11) 00:05:16.384 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.384 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:16.384 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:16.642 true 00:05:16.642 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:16.642 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.900 14:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.159 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:17.159 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:17.417 true 00:05:17.417 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:17.417 14:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.791 14:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.791 14:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:18.791 14:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:18.791 true 00:05:19.048 14:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:19.048 14:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.615 14:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.873 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:19.873 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:20.131 true 00:05:20.131 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:20.131 
14:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.389 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.647 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:20.647 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:20.647 true 00:05:20.647 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:20.647 14:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.021 14:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.279 14:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:22.279 14:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:22.279 true 00:05:22.279 14:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:22.279 14:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.213 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.471 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:23.471 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:23.471 true 00:05:23.471 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:23.471 14:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.729 14:15:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.986 14:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:23.986 14:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:24.244 true 00:05:24.244 14:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:24.244 14:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.178 14:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.436 14:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:25.436 14:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:25.694 true 00:05:25.694 14:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:25.694 14:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.628 14:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.628 14:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:26.628 14:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:26.886 true 00:05:26.886 14:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:26.886 14:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.143 14:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.401 14:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:27.401 14:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:27.401 true 00:05:27.401 14:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:27.401 14:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.773 14:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.773 14:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:28.773 14:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:29.031 true 00:05:29.031 14:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:29.031 14:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.966 14:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.966 14:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:29.966 14:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:30.223 true 00:05:30.223 14:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:30.223 14:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.479 14:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.479 14:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:30.479 14:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:30.737 true 00:05:30.737 14:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:30.737 14:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.111 14:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.111 14:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:32.111 14:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:32.369 true 00:05:32.369 14:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:32.369 14:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.304 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.304 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:33.304 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:33.561 true 00:05:33.561 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:33.561 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.561 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.819 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:33.819 14:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:34.077 true 00:05:34.077 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1289123 00:05:34.077 14:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.012 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.270 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:35.270 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:35.529 true 00:05:35.529 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:35.529 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.786 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.786 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:35.786 14:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:36.044 true 00:05:36.044 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:36.044 14:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.418 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.418 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:37.418 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:37.677 true 00:05:37.677 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:37.677 14:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.610 14:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.610 14:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:38.610 14:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:38.867 true 00:05:38.867 14:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:38.867 14:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.125 14:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.383 14:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:39.383 14:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:39.383 true 00:05:39.383 14:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:39.383 14:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.756 14:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.756 14:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:40.756 14:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:41.013 true 00:05:41.013 14:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:41.013 14:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.013 14:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.282 14:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:41.282 14:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1026 00:05:41.544 true 00:05:41.544 14:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:41.544 14:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.478 14:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.737 14:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:42.737 14:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:42.994 true 00:05:42.994 14:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:42.994 14:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.928 14:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.928 14:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:43.928 14:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:44.186 true 00:05:44.186 14:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123 00:05:44.186 14:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.444 Initializing NVMe Controllers 00:05:44.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:44.444 Controller IO queue size 128, less than required. 00:05:44.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:44.444 Controller IO queue size 128, less than required. 00:05:44.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:44.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:44.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:44.444 Initialization complete. Launching workers. 
00:05:44.444 ========================================================
00:05:44.444                                                                                                Latency(us)
00:05:44.444 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:05:44.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1698.73       0.83   49125.66    2559.55 1139538.95
00:05:44.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16501.12       8.06    7736.95    2179.44  381154.61
00:05:44.444 ========================================================
00:05:44.444 Total                                                                  :   18199.85       8.89   11600.06    2179.44 1139538.95
00:05:44.444
00:05:44.444 14:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:44.702 14:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:44.702 14:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:44.702 true
00:05:44.960 14:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1289123
00:05:44.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1289123) - No such process
00:05:44.960 14:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1289123
00:05:44.960 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.960 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:45.218 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:45.218 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:45.218 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:45.218 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:45.218 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:45.475 null0
00:05:45.475 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:45.475 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:45.475 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:45.733 null1
00:05:45.733 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:45.733 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:45.733 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:45.733 null2 00:05:45.733 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:45.733 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:45.733 14:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:45.991 null3 00:05:45.991 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:45.991 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:45.991 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:46.249 null4 00:05:46.249 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.249 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.249 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:46.249 null5 00:05:46.507 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.507 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.507 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:46.507 null6 00:05:46.507 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.507 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.507 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:46.766 null7 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
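
What this stretch of the log has been exercising: an I/O generator (pid 1289123 in this run) reads from subsystem nqn.2016-06.io.spdk:cnode1 while the script repeatedly hot-removes namespace 1, re-adds the Delay0 bdev, and grows the NULL1 bdev by one size unit per pass; the suppressed "Read completed with error (sct=0, sc=11)" messages are the kind of failures you would expect from reads racing the detach. Once the generator exits, the kill -0 probe at script line 44 fails with "No such process" and the loop ends. A minimal sketch of that loop, reconstructed from the xtrace above (the rpc.py path is shortened, and PERF_PID is a stand-in for the generator's pid; the real script may differ in detail):

    # Hot-remove/re-add namespace 1 and grow NULL1 while the generator runs.
    while kill -0 "$PERF_PID"; do
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))                 # counted 1020..1029 above
        rpc.py bdev_null_resize NULL1 "$null_size"   # prints "true" on success
    done
    wait "$PERF_PID"
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2

The latency summary above is consistent with this picture: NSID 1, which spends the run being detached and re-attached, completes far fewer I/Os (about 1699 IOPS at a 49 ms average) than the undisturbed NSID 2 (about 16501 IOPS at 7.7 ms).
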
00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
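
From this point the trace interleaves eight parallel workers, one per namespace. Each worker is handed its own namespace ID and null bdev (add_remove 1 null0 through add_remove 8 null7) and flips that namespace in and out of the subsystem ten times. A sketch of the helper and its launcher as the xtrace at script lines 14-18 and 58-66 suggests (rpc.py path again shortened; a reconstruction, not the verbatim script):

    add_remove() {
        # Add namespace $nsid backed by $bdev, then remove it, ten times over.
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096   # 100 MB bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &            # namespace i+1, bdev null<i>
        pids+=($!)
    done
    wait "${pids[@]}"

Running the workers as background jobs is what makes this a hotplug stress test: eight namespaces are attached and detached concurrently against the same subsystem while the target keeps serving the connected host.
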
00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:46.766 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
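
Because the eight workers are background subshells sharing a single xtrace stream, their loop counters and RPC calls interleave arbitrarily in the log; any one worker can still be followed by its fixed pairing of namespace ID to bdev (nsid 3 always travels with null2, and so on). The wait on pids 1295117 through 1295130 just below blocks until every worker has finished its ten add/remove rounds. Given a per-entry copy of this console output (build.log is a hypothetical file name), one worker's operations can be pulled out with something like:

    grep -E 'nvmf_subsystem_add_ns -n 3 |nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3' build.log
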
00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1295117 1295118 1295120 1295123 1295124 1295126 1295128 1295130 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.767 14:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.025 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.025 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.025 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.025 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.025 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.025 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.025 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.025 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.283 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.541 14:15:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.541 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.799 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.799 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.799 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.799 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.799 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.799 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.799 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.799 14:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.058 14:15:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.058 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.316 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.316 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.316 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.316 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.316 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.316 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.316 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.316 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.574 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.574 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.574 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.574 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.574 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.575 14:15:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.575 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.833 14:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.833 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.833 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.834 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.091 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.091 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.091 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.091 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.091 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.091 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.091 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.091 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.350 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.609 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.610 14:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.868 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.868 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:05:49.868 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.868 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.868 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.868 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.868 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.868 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.140 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.141 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.404 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.404 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.404 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.404 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.404 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.404 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.404 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.404 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.662 14:15:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 ))
00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:50.662 14:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
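The interleaved add/remove trace above all comes from lines 16-18 of target/ns_hotplug_stress.sh. A minimal bash sketch of the pattern, reconstructed only from this xtrace output and not copied from the SPDK source (the real script evidently randomizes the namespace order, and the interleaving suggests the RPCs may run in background jobs):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do                 # the "(( ++i )) / (( i < 10 ))" pairs at line 16
        n=$(( RANDOM % 8 + 1 ))                      # nsid n is always paired with bdev null(n-1) in the trace
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &   # line 17
    done
    wait
    for n in $(shuf -e {1..8}); do                   # line 18: periodically strip every namespace again
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait

The point of the exercise is simply to hammer namespace hot-attach/hot-detach on one subsystem; the exact loop shape matters less than the add/remove churn visible above.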
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:50.920 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:50.921 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:50.921 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:05:50.921 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:05:50.921 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:05:50.921 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:05:50.921 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:05:50.921 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:05:50.921 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:05:50.921 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:05:50.921 rmmod nvme_tcp
00:05:50.921 rmmod nvme_fabrics
00:05:50.921 rmmod nvme_keyring
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1288646 ']'
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1288646
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1288646 ']'
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1288646
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1288646
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1288646'
00:05:51.180 killing process with pid 1288646
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1288646
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1288646
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:51.180 14:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:05:53.719 
00:05:53.719 real 0m47.729s
00:05:53.719 user 3m15.464s
00:05:53.719 sys 0m15.216s
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:53.719 ************************************
00:05:53.719 END TEST nvmf_ns_hotplug_stress
00:05:53.719 ************************************
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:05:53.719 ************************************
00:05:53.719 START TEST nvmf_delete_subsystem
00:05:53.719 ************************************
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:05:53.719 * Looking for test storage...
00:05:53.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:53.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:53.719 --rc genhtml_branch_coverage=1
00:05:53.719 --rc genhtml_function_coverage=1
00:05:53.719 --rc genhtml_legend=1
00:05:53.719 --rc geninfo_all_blocks=1
00:05:53.719 --rc geninfo_unexecuted_blocks=1
00:05:53.719 
00:05:53.719 '
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:53.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:53.719 --rc genhtml_branch_coverage=1
00:05:53.719 --rc genhtml_function_coverage=1
00:05:53.719 --rc genhtml_legend=1
00:05:53.719 --rc geninfo_all_blocks=1
00:05:53.719 --rc geninfo_unexecuted_blocks=1
00:05:53.719 
00:05:53.719 '
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:53.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:53.719 --rc genhtml_branch_coverage=1
00:05:53.719 --rc genhtml_function_coverage=1
00:05:53.719 --rc genhtml_legend=1
00:05:53.719 --rc geninfo_all_blocks=1
00:05:53.719 --rc geninfo_unexecuted_blocks=1
00:05:53.719 
00:05:53.719 '
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:53.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:53.719 --rc genhtml_branch_coverage=1
00:05:53.719 --rc genhtml_function_coverage=1
00:05:53.719 --rc genhtml_legend=1
00:05:53.719 --rc geninfo_all_blocks=1
00:05:53.719 --rc geninfo_unexecuted_blocks=1
00:05:53.719 
00:05:53.719 '
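The scripts/common.sh walk above is a field-by-field version compare: lt 1.15 2 splits both strings on ".-:", pads the shorter one, and compares component by component to decide which lcov flags to use. A condensed sketch of that pattern, written from the xtrace rather than copied from the helper, so details may differ from the real implementation:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local ver1 ver2 op=$2 v d1 d2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        # Walk the longer of the two component lists; missing fields count as 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"   # the branch the trace takes (1 < 2 at the first field)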
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:53.719 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:53.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:05:53.720 14:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
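The "[: : integer expression expected" message above is bash's test builtin rejecting an empty string in a numeric comparison: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' because some flag variable is unset in this environment, and the script simply treats the failed test as false and continues. A two-line repro plus the usual guard (the variable name here is made up for illustration):

    some_flag=""                       # unset/empty, as in this CI run
    [ "$some_flag" -eq 1 ]             # prints: [: : integer expression expected
    [ "${some_flag:-0}" -eq 1 ]        # guarded: empty defaults to 0, no error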
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:06:00.293 Found 0000:86:00.0 (0x8086 - 0x159b)
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:06:00.293 Found 0000:86:00.1 (0x8086 - 0x159b)
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:06:00.293 Found net devices under 0000:86:00.0: cvl_0_0
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:06:00.293 Found net devices under 0000:86:00.1: cvl_0_1
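The discovery pass above maps each supported PCI NIC to its kernel net device through sysfs. A condensed sketch of that idiom, using the two e810 addresses the log reports (the real nvmf/common.sh adds more filtering, for example link state and RDMA handling):

    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs exposes the ifname per function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done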
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:00.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:00.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms
00:06:00.293 
00:06:00.293 --- 10.0.0.2 ping statistics ---
00:06:00.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:00.293 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:00.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:00.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms
00:06:00.293 
00:06:00.293 --- 10.0.0.1 ping statistics ---
00:06:00.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:00.293 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:00.293 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1299511
00:06:00.294 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1299511
00:06:00.294 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1299511 ']'
00:06:00.294 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:00.294 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:06:00.294 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
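Collected in one place, the nvmf_tcp_init sequence above builds the test topology: one port of the e810 NIC (cvl_0_0) is moved into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) can talk over real hardware on a single host. Condensed from the trace, with error handling and cleanup omitted:

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                     # target-side port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    # Open the NVMe/TCP port on the initiator side, tagged so cleanup can find it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # root namespace -> target
    ip netns exec $NS ping -c 1 10.0.0.1              # target namespace -> initiator

The nvmf_tgt process is then launched inside the namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt" entry above), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD.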
00:06:00.294 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:00.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:00.294 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:00.294 14:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:00.294 [2024-11-17 14:15:48.830255] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:06:00.294 [2024-11-17 14:15:48.830307] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:00.294 [2024-11-17 14:15:48.908907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:00.294 [2024-11-17 14:15:48.952139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:00.294 [2024-11-17 14:15:48.952173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:00.294 [2024-11-17 14:15:48.952180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:00.294 [2024-11-17 14:15:48.952186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:00.294 [2024-11-17 14:15:48.952191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:00.294 [2024-11-17 14:15:48.953363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.294 [2024-11-17 14:15:48.953370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:00.294 [2024-11-17 14:15:49.092823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:00.294 [2024-11-17 14:15:49.113023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:00.294 NULL1
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:00.294 Delay0
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1299661
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:06:00.294 14:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:06:00.294 [2024-11-17 14:15:49.224818] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
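The delete_subsystem setup above, replayed as plain rpc.py calls. This assumes rpc_cmd simply forwards its arguments to rpc.py, which the surrounding traces suggest but which is not shown here verbatim; the delay-bdev latency values are in microseconds per the bdev_delay RPC parameters, so roughly 1 s per I/O:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512      # 1000 MiB null bdev, 512-byte blocks
    # Wrap the null bdev in a delay bdev so I/O is still queued when the
    # subsystem is torn down mid-run.
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # With spdk_nvme_perf hammering the target, deleting the subsystem forces
    # the in-flight commands to complete with abort-style errors, which is
    # exactly the flood of "completed with error (sct=0, sc=8)" lines below.
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1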
00:06:02.194 14:15:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:02.194 14:15:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.194 14:15:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:02.452 Read completed with error (sct=0, sc=8)
00:06:02.453 Read completed with error (sct=0, sc=8)
00:06:02.453 Read completed with error (sct=0, sc=8)
00:06:02.453 Read completed with error (sct=0, sc=8)
00:06:02.453 starting I/O failed: -6
[... several hundred identical "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records omitted; the nvme_tcp recv-state errors below were interleaved among them ...]
00:06:02.453 [2024-11-17 14:15:51.429584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a4a0 is same with the state(6) to be set
00:06:02.453 [2024-11-17 14:15:51.434559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f87c8000c40 is same with the state(6) to be set
00:06:03.389 [2024-11-17 14:15:52.401536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179b9a0 is same with the state(6) to be set
00:06:03.389 [2024-11-17 14:15:52.432709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a2c0 is same with the state(6) to be set
00:06:03.389 [2024-11-17 14:15:52.433011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a860 is same with the state(6) to be set
00:06:03.389 [2024-11-17 14:15:52.437246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f87c800d680 is same with the state(6) to be set
00:06:03.389 [2024-11-17 14:15:52.437940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f87c800d020 is same with the state(6) to be set
00:06:03.389 Initializing NVMe Controllers
00:06:03.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:03.389 Controller IO queue size 128, less than required.
00:06:03.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:03.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:03.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:03.389 Initialization complete. Launching workers.
00:06:03.389 ========================================================
00:06:03.389 Latency(us)
00:06:03.389 Device Information : IOPS MiB/s Average min max
00:06:03.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.29 0.08 911037.03 293.45 1005926.04
00:06:03.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.76 0.08 948042.13 270.62 2002054.13
00:06:03.389 ========================================================
00:06:03.389 Total : 330.05 0.16 929846.56 270.62 2002054.13
00:06:03.389
00:06:03.389 [2024-11-17 14:15:52.438550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179b9a0 (9): Bad file descriptor
00:06:03.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:03.389 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:03.389 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:03.389 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1299661
00:06:03.389 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1299661
00:06:03.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1299661) - No such process
00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1299661
00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1299661
00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.965 14:15:52
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1299661 00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.965 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.966 [2024-11-17 14:15:52.966142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1300230 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1300230 00:06:03.966 14:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.966 [2024-11-17 14:15:53.056271] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
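The second run is torn down the same way the first was: the script polls the perf pid with kill -0 and half-second sleeps, bounded by a poll budget (30 polls at delete_subsystem.sh line 38 in the first run, 20 at line 60 here). A reconstruction of that loop from the line-numbered xtrace records, as a sketch rather than the verbatim script:

    perf_pid=$!                     # 1299661 in the first run, 1300230 here
    delay=0
    while kill -0 "$perf_pid"; do   # signal 0 sends nothing; it only tests that the pid still exists
            if ((delay++ > 20)); then
                    echo "perf pid $perf_pid did not exit in time" >&2
                    exit 1
            fi
            sleep 0.5
    done
    wait "$perf_pid"                # collect the exit status once kill -0 starts failing

In the first run the subsystem was deleted out from under perf, so the wait is wrapped in autotest_common.sh's NOT helper (the "NOT wait 1299661" records above) to assert a nonzero exit; here perf is simply expected to finish its 3-second workload.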
00:06:04.532 14:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:04.532 14:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1300230 00:06:04.532 14:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:04.790 14:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:04.790 14:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1300230 00:06:04.790 14:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.358 14:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.358 14:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1300230 00:06:05.358 14:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.923 14:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.923 14:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1300230 00:06:05.923 14:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.489 14:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.489 14:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1300230 00:06:06.489 14:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.055 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.055 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1300230 00:06:07.055 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.055 Initializing NVMe Controllers 00:06:07.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:07.055 Controller IO queue size 128, less than required. 00:06:07.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:07.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:07.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:07.055 Initialization complete. Launching workers. 
00:06:07.055 ======================================================== 00:06:07.055 Latency(us) 00:06:07.055 Device Information : IOPS MiB/s Average min max 00:06:07.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002920.48 1000152.03 1010093.25 00:06:07.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004391.78 1000205.12 1011408.92 00:06:07.055 ======================================================== 00:06:07.055 Total : 256.00 0.12 1003656.13 1000152.03 1011408.92 00:06:07.055 00:06:07.314 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.314 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1300230 00:06:07.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1300230) - No such process 00:06:07.314 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1300230 00:06:07.314 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:07.314 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:07.314 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:07.314 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:07.314 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:07.314 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:07.314 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:07.314 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:07.314 rmmod nvme_tcp 00:06:07.573 rmmod nvme_fabrics 00:06:07.573 rmmod nvme_keyring 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1299511 ']' 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1299511 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1299511 ']' 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1299511 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1299511 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1299511' 00:06:07.573 killing process with pid 1299511 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1299511 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1299511 00:06:07.573 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:07.574 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:07.574 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:07.574 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:07.574 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:07.574 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:07.574 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:07.574 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:07.574 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:07.574 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.574 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.574 14:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.131 14:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:10.131 00:06:10.131 real 0m16.327s 00:06:10.131 user 0m29.502s 00:06:10.131 sys 0m5.551s 00:06:10.131 14:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.131 14:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:10.131 ************************************ 00:06:10.131 END TEST nvmf_delete_subsystem 00:06:10.131 ************************************ 00:06:10.131 14:15:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:10.131 14:15:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.132 14:15:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.132 14:15:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:10.132 ************************************ 00:06:10.132 START TEST nvmf_host_management 00:06:10.132 ************************************ 00:06:10.132 14:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:10.132 * Looking for test storage... 
00:06:10.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.132 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.133 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:10.134 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.134 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.134 --rc genhtml_branch_coverage=1 00:06:10.134 --rc genhtml_function_coverage=1 00:06:10.134 --rc genhtml_legend=1 00:06:10.134 --rc geninfo_all_blocks=1 00:06:10.134 --rc geninfo_unexecuted_blocks=1 00:06:10.134 00:06:10.134 ' 00:06:10.134 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.134 --rc genhtml_branch_coverage=1 00:06:10.134 --rc genhtml_function_coverage=1 00:06:10.134 --rc genhtml_legend=1 00:06:10.134 --rc geninfo_all_blocks=1 00:06:10.134 --rc geninfo_unexecuted_blocks=1 00:06:10.134 00:06:10.134 ' 00:06:10.134 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.134 --rc genhtml_branch_coverage=1 00:06:10.134 --rc genhtml_function_coverage=1 00:06:10.134 --rc genhtml_legend=1 00:06:10.134 --rc geninfo_all_blocks=1 00:06:10.134 --rc geninfo_unexecuted_blocks=1 00:06:10.134 00:06:10.134 ' 00:06:10.134 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.134 --rc genhtml_branch_coverage=1 00:06:10.134 --rc genhtml_function_coverage=1 00:06:10.134 --rc genhtml_legend=1 00:06:10.134 --rc geninfo_all_blocks=1 00:06:10.134 --rc geninfo_unexecuted_blocks=1 00:06:10.134 00:06:10.134 ' 00:06:10.134 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:10.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.135 14:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.892 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.892 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:16.892 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:16.892 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:16.893 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:16.893 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:16.893 Found net devices under 0000:86:00.0: cvl_0_0 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.893 14:16:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:16.893 Found net devices under 0000:86:00.1: cvl_0_1 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.893 14:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.893 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.893 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.893 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:16.893 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.893 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.893 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
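Stripped of the xtrace framing, nvmf_tcp_init's namespace plumbing is a handful of ip(8) calls: the target-side port (cvl_0_0) is moved into a private netns so target and initiator talk across the physical wire instead of the kernel's loopback shortcut. Condensed into a runnable sketch (the same commands as in the trace above):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up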
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.893 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:16.893 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:16.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:16.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:06:16.893 00:06:16.893 --- 10.0.0.2 ping statistics --- 00:06:16.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.893 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:06:16.893 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:06:16.893 00:06:16.893 --- 10.0.0.1 ping statistics --- 00:06:16.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.893 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:06:16.893 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1304467 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1304467 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:16.894 14:16:05 
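The ipts call at nvmf/common.sh@287 is a thin wrapper that tags every firewall rule it installs with an identifying comment (visible in the expanded iptables command above), and the two pings then prove both directions of the 10.0.0.0/24 path before any NVMe/TCP traffic is attempted. A sketch of the wrapper idea, with the teardown one-liner being an assumption about how the tag is consumed rather than something shown in this log:

# Tag each rule with a comment so cleanup can filter on it later.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# One way teardown can drop every tagged rule in a single pass:
iptables-save | grep -v SPDK_NVMF | iptables-restore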
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1304467 ']' 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.894 [2024-11-17 14:16:05.207510] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:16.894 [2024-11-17 14:16:05.207553] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.894 [2024-11-17 14:16:05.287692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.894 [2024-11-17 14:16:05.330837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:16.894 [2024-11-17 14:16:05.330875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:16.894 [2024-11-17 14:16:05.330882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.894 [2024-11-17 14:16:05.330888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.894 [2024-11-17 14:16:05.330893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:16.894 [2024-11-17 14:16:05.332425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.894 [2024-11-17 14:16:05.332532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.894 [2024-11-17 14:16:05.332639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.894 [2024-11-17 14:16:05.332640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.894 [2024-11-17 14:16:05.469901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.894 Malloc0 00:06:16.894 [2024-11-17 14:16:05.542138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
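At this point the target is up: nvmf_create_transport has initialized the TCP transport, and the RPC batch replayed from rpcs.txt (the cat at host_management.sh@23; the file's contents are not echoed into the log) has produced a Malloc0 bdev and a subsystem listening on 10.0.0.2:4420. Judging from the bdev name, the listener notice, and the host add/remove calls later in the run, the batch is approximately the following — an assumed sketch using standard SPDK RPC names, not the verbatim file:

# replayed via rpc_cmd against /var/tmp/spdk.sock inside the target netns
bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0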
target/host_management.sh@73 -- # perfpid=1304514 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1304514 /var/tmp/bdevperf.sock 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1304514 ']' 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:16.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:16.894 { 00:06:16.894 "params": { 00:06:16.894 "name": "Nvme$subsystem", 00:06:16.894 "trtype": "$TEST_TRANSPORT", 00:06:16.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:16.894 "adrfam": "ipv4", 00:06:16.894 "trsvcid": "$NVMF_PORT", 00:06:16.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:16.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:16.894 "hdgst": ${hdgst:-false}, 00:06:16.894 "ddgst": ${ddgst:-false} 00:06:16.894 }, 00:06:16.894 "method": "bdev_nvme_attach_controller" 00:06:16.894 } 00:06:16.894 EOF 00:06:16.894 )") 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:16.894 14:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:16.894 "params": { 00:06:16.894 "name": "Nvme0", 00:06:16.894 "trtype": "tcp", 00:06:16.894 "traddr": "10.0.0.2", 00:06:16.894 "adrfam": "ipv4", 00:06:16.894 "trsvcid": "4420", 00:06:16.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:16.894 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:16.894 "hdgst": false, 00:06:16.894 "ddgst": false 00:06:16.894 }, 00:06:16.894 "method": "bdev_nvme_attach_controller" 00:06:16.894 }' 00:06:16.894 [2024-11-17 14:16:05.639294] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
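gen_nvmf_target_json, traced above, renders one bdev_nvme_attach_controller stanza per subsystem from a here-doc template and hands the result to bdevperf through process substitution (--json /dev/fd/63), so no config file ever touches disk. A condensed sketch of the trick; the outer "subsystems"/"bdev" wrapper is inferred from SPDK's JSON config shape and may differ in detail from the helper's exact output:

config=()
for subsystem in 0; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the stanzas with commas and wrap them into a bdev-subsystem config.
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
JSON

Invoking it as bdevperf --json <(gen_nvmf_target_json 0) is what produces the /dev/fd/63 path seen in the command line above.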
00:06:16.894 [2024-11-17 14:16:05.639337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304514 ] 00:06:16.894 [2024-11-17 14:16:05.717510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.894 [2024-11-17 14:16:05.758960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.894 Running I/O for 10 seconds... 00:06:16.894 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.894 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:16.894 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:16.894 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.894 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.894 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:16.895 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:17.156 
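The waitforio helper traced here polls bdevperf's RPC socket until the Nvme0n1 bdev has completed at least 100 reads, probing at most 10 times with 0.25 s between attempts; the first probe sees 67 ops, so it sleeps and retries. Reconstructed as a standalone function with the same logic as the trace (rpc_cmd is the suite's own RPC wrapper):

waitforio() {
    # $1 = RPC socket, $2 = bdev name; succeeds once >= 100 reads are seen
    local sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}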
14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']'
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.156 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:17.156 [2024-11-17 14:16:06.360824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09200 is same with the state(6) to be set
00:06:17.156 [the identical tcp.c:1773 message is logged 47 more times, 14:16:06.360869 through 14:16:06.361158; repetitions elided]
00:06:17.156 [2024-11-17 14:16:06.362036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:06:17.156 [2024-11-17 14:16:06.362069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:17.156 [the same command/completion pair is logged for the ASYNC EVENT REQUESTs on cid:1, cid:2 and cid:3; repetitions elided]
00:06:17.157 [2024-11-17 14:16:06.362122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89c500 is same with the state(6) to be set
00:06:17.157 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.157 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:17.157 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.157 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:17.157 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.157 14:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:17.157 [2024-11-17 14:16:06.374459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89c500 (9): Bad file descriptor
00:06:17.157 [2024-11-17 14:16:06.374540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:17.157 [2024-11-17 14:16:06.374552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:17.157 [the remaining 63 in-flight commands (READ cid:63 lba:98176 and WRITE cid:0 through cid:61, lba:98304 through lba:106112) are printed and completed as ABORTED - SQ DELETION in the same way, 14:16:06.374570 through 14:16:06.375547; repetitions elided]
00:06:17.417 [2024-11-17 14:16:06.376510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:17.417 task offset: 98048 on job bdev=Nvme0n1 fails
00:06:17.417
00:06:17.417 Latency(us)
00:06:17.417 [2024-11-17T13:16:06.642Z] Device Information          : runtime(s)  IOPS     MiB/s    Fail/s   TO/s    Average   min       max
00:06:17.417 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:17.417 Job: Nvme0n1 ended in about 0.42 seconds with error
00:06:17.417 Verification LBA range: start 0x0 length 0x400
00:06:17.417 Nvme0n1                      : 0.42        1840.38  115.02   153.77   0.00    31246.77  1403.33   27468.13
00:06:17.417 [2024-11-17T13:16:06.642Z] ===================================================================================================================
00:06:17.417 [2024-11-17T13:16:06.642Z] Total                       :             1840.38  115.02   153.77   0.00    31246.77  1403.33   27468.13
00:06:17.417 [2024-11-17 14:16:06.378891] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:17.417 [2024-11-17 14:16:06.386555] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:06:18.355 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1304514
00:06:18.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1304514) - No such process
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:18.356 {
00:06:18.356   "params": {
00:06:18.356     "name": "Nvme$subsystem",
00:06:18.356     "trtype": "$TEST_TRANSPORT",
00:06:18.356     "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:18.356     "adrfam": "ipv4",
00:06:18.356     "trsvcid": "$NVMF_PORT",
00:06:18.356     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:18.356     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:18.356     "hdgst": ${hdgst:-false},
00:06:18.356     "ddgst": ${ddgst:-false}
00:06:18.356   },
00:06:18.356   "method": "bdev_nvme_attach_controller"
00:06:18.356 }
00:06:18.356 EOF
00:06:18.356 )")
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:18.356 14:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:18.356   "params": {
00:06:18.356     "name": "Nvme0",
00:06:18.356     "trtype": "tcp",
00:06:18.356     "traddr": "10.0.0.2",
00:06:18.356     "adrfam": "ipv4",
00:06:18.356     "trsvcid": "4420",
00:06:18.356     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:18.356     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:18.356     "hdgst": false,
00:06:18.356     "ddgst": false
00:06:18.356   },
00:06:18.356   "method": "bdev_nvme_attach_controller"
00:06:18.356 }'
00:06:18.356 [2024-11-17 14:16:07.428422] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
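To recap the disruption this test injects: with 64 I/Os in flight, host_management.sh@84 revokes the initiator's host NQN from the subsystem, the target tears down the qpair (the SQ-deletion aborts collapsed above), and @85 re-authorizes the host a second later so bdev_nvme can reconnect and reset the controller. In scripts/rpc.py terms the fault amounts to the following equivalent sketch (the test drives the same RPC names through its rpc_cmd wrapper):

scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The second, 1-second bdevperf run being launched here exists to prove that I/O flows again after the recovery.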
00:06:18.356 [2024-11-17 14:16:07.428473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304959 ] 00:06:18.356 [2024-11-17 14:16:07.504113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.356 [2024-11-17 14:16:07.544016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.615 Running I/O for 1 seconds... 00:06:19.994 1920.00 IOPS, 120.00 MiB/s 00:06:19.994 Latency(us) 00:06:19.994 [2024-11-17T13:16:09.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:19.994 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:19.994 Verification LBA range: start 0x0 length 0x400 00:06:19.994 Nvme0n1 : 1.01 1967.03 122.94 0.00 0.00 32024.64 6183.18 27582.11 00:06:19.994 [2024-11-17T13:16:09.219Z] =================================================================================================================== 00:06:19.994 [2024-11-17T13:16:09.219Z] Total : 1967.03 122.94 0.00 0.00 32024.64 6183.18 27582.11 00:06:19.994 14:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:19.994 14:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:19.994 14:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:19.994 rmmod nvme_tcp 00:06:19.994 rmmod nvme_fabrics 00:06:19.994 rmmod nvme_keyring 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1304467 ']' 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1304467 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1304467 ']' 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1304467 00:06:19.994 14:16:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1304467 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1304467' 00:06:19.994 killing process with pid 1304467 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1304467 00:06:19.994 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1304467 00:06:20.253 [2024-11-17 14:16:09.280422] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.253 14:16:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.160 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:22.160 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:22.160 00:06:22.160 real 0m12.453s 00:06:22.160 user 0m19.824s 00:06:22.160 sys 0m5.592s 00:06:22.160 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.160 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.160 ************************************ 00:06:22.160 END TEST nvmf_host_management 00:06:22.160 ************************************ 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
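Before the lvol test output begins, note the teardown idiom just traced at the end of host_management. A hedged sketch, assuming the same helper names as the harness (killprocess, iptr) and approximating their bodies from the traced commands:

```bash
# Sketch of the traced teardown: confirm the pid is alive and not a bare
# sudo wrapper before killing it, then unload the NVMe modules and strip
# only the firewall rules tagged with the SPDK_NVMF comment.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0           # already gone
    [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                        # wait works: target is our child
}

killprocess "$nvmfpid"
modprobe -v -r nvme-tcp                               # rmmod nvme_tcp
modprobe -v -r nvme-fabrics                           # rmmod nvme_fabrics/keyring
# iptr: restore the ruleset minus every SPDK-tagged rule.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1                              # release the initiator address
```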
00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:22.420 ************************************ 00:06:22.420 START TEST nvmf_lvol 00:06:22.420 ************************************ 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:22.420 * Looking for test storage... 00:06:22.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.420 --rc genhtml_branch_coverage=1 00:06:22.420 --rc genhtml_function_coverage=1 00:06:22.420 --rc genhtml_legend=1 00:06:22.420 --rc geninfo_all_blocks=1 00:06:22.420 --rc geninfo_unexecuted_blocks=1 00:06:22.420 00:06:22.420 ' 00:06:22.420 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.421 --rc genhtml_branch_coverage=1 00:06:22.421 --rc genhtml_function_coverage=1 00:06:22.421 --rc genhtml_legend=1 00:06:22.421 --rc geninfo_all_blocks=1 00:06:22.421 --rc geninfo_unexecuted_blocks=1 00:06:22.421 00:06:22.421 ' 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.421 --rc genhtml_branch_coverage=1 00:06:22.421 --rc genhtml_function_coverage=1 00:06:22.421 --rc genhtml_legend=1 00:06:22.421 --rc geninfo_all_blocks=1 00:06:22.421 --rc geninfo_unexecuted_blocks=1 00:06:22.421 00:06:22.421 ' 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.421 --rc genhtml_branch_coverage=1 00:06:22.421 --rc genhtml_function_coverage=1 00:06:22.421 --rc genhtml_legend=1 00:06:22.421 --rc geninfo_all_blocks=1 00:06:22.421 --rc geninfo_unexecuted_blocks=1 00:06:22.421 00:06:22.421 ' 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
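The cmp_versions trace above is the harness gating lcov flags on the installed version. A simplified, illustrative re-implementation of the traced loop (the real scripts/common.sh also validates digits and handles <= and >=):

```bash
# Split each version on . - : and compare numerically field by field,
# treating missing fields as 0, as the traced (( ver1[v] < ver2[v] )) loop does.
cmp_versions() { # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v d1 d2 len
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if ((d1 > d2)); then [[ $op == '>' ]]; return; fi
        if ((d1 < d2)); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '=' ]] # every field matched
}

cmp_versions 1.15 '<' 2 && echo "old lcov: use the branch/function coverage rc flags"
```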
00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.421 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:22.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:22.681 14:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.257 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:29.258 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:29.258 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:29.258 14:16:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:29.258 Found net devices under 0000:86:00.0: cvl_0_0 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:29.258 Found net devices under 0000:86:00.1: cvl_0_1 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:29.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:06:29.258 00:06:29.258 --- 10.0.0.2 ping statistics --- 00:06:29.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.258 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:29.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:06:29.258 00:06:29.258 --- 10.0.0.1 ping statistics --- 00:06:29.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.258 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.258 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1308751 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1308751 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1308751 ']' 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.259 [2024-11-17 14:16:17.717185] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
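The block above is nvmf_tcp_init building its two-endpoint topology before nvmf_tgt starts. Condensed from the traced commands, with interface and namespace names exactly as logged:

```bash
# The target NIC moves into a private network namespace while the initiator
# NIC stays in the root namespace, so 10.0.0.1 <-> 10.0.0.2 crosses a real
# e810 link. nvmf_tgt is then launched via "ip netns exec cvl_0_0_ns_spdk ...".
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Tag the rule so teardown can strip it with grep -v SPDK_NVMF:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Connectivity checks, both directions, as in the ping output above:
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```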
00:06:29.259 [2024-11-17 14:16:17.717229] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.259 [2024-11-17 14:16:17.797226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.259 [2024-11-17 14:16:17.839128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.259 [2024-11-17 14:16:17.839165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.259 [2024-11-17 14:16:17.839173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.259 [2024-11-17 14:16:17.839179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.259 [2024-11-17 14:16:17.839184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:29.259 [2024-11-17 14:16:17.840551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.259 [2024-11-17 14:16:17.840658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.259 [2024-11-17 14:16:17.840659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.259 14:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:29.259 [2024-11-17 14:16:18.141920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.259 14:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:29.259 14:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:29.259 14:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:29.518 14:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:29.518 14:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:29.778 14:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:30.037 14:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b3742de3-c192-4bff-b0aa-99bfaa31d628 00:06:30.037 14:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b3742de3-c192-4bff-b0aa-99bfaa31d628 lvol 20 00:06:30.037 14:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=11d96065-2a5c-48bb-aaf1-57394ea4de16 00:06:30.037 14:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:30.296 14:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 11d96065-2a5c-48bb-aaf1-57394ea4de16 00:06:30.556 14:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:30.815 [2024-11-17 14:16:19.814463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.815 14:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:31.075 14:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1309247 00:06:31.075 14:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:31.075 14:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:32.014 14:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 11d96065-2a5c-48bb-aaf1-57394ea4de16 MY_SNAPSHOT 00:06:32.274 14:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1f20705c-6aeb-4402-9ee5-6b37c42619ac 00:06:32.274 14:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 11d96065-2a5c-48bb-aaf1-57394ea4de16 30 00:06:32.534 14:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1f20705c-6aeb-4402-9ee5-6b37c42619ac MY_CLONE 00:06:32.794 14:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3399429c-96ad-4c93-8d04-2acf4eef4f6e 00:06:32.794 14:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3399429c-96ad-4c93-8d04-2acf4eef4f6e 00:06:33.362 14:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1309247 00:06:41.489 Initializing NVMe Controllers 00:06:41.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:41.489 Controller IO queue size 128, less than required. 00:06:41.489 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
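The RPC sequence traced across this lvol test condenses to the sketch below (rpc.py path shortened; $lvs, $lvol, $snap, and $clone stand in for the UUIDs rpc.py prints back, e.g. b3742de3-... and 11d96065-... in this run):

```bash
# Condensed lvol workflow from the trace: raid0 over two malloc bdevs,
# an lvstore on top, a 20 MiB lvol exported over NVMe/TCP, then live
# snapshot/resize/clone/inflate while spdk_nvme_perf writes to it.
rpc=scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                            # -> Malloc0
$rpc bdev_malloc_create 64 512                            # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)           # 20 MiB lvol
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Traced perf load, run in the background while the lvol tree is mutated:
#   spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
#       -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                          # grow 20 -> 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                           # decouple clone from snapshot
```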
00:06:41.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:41.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:41.489 Initialization complete. Launching workers. 00:06:41.489 ======================================================== 00:06:41.489 Latency(us) 00:06:41.489 Device Information : IOPS MiB/s Average min max 00:06:41.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11865.10 46.35 10791.46 1832.70 40449.51 00:06:41.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11997.20 46.86 10668.69 3666.69 105336.91 00:06:41.489 ======================================================== 00:06:41.489 Total : 23862.30 93.21 10729.74 1832.70 105336.91 00:06:41.489 00:06:41.489 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:41.489 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 11d96065-2a5c-48bb-aaf1-57394ea4de16 00:06:41.747 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3742de3-c192-4bff-b0aa-99bfaa31d628 00:06:41.747 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:41.747 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:41.747 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:41.747 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:41.747 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:41.747 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:41.747 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:41.747 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:41.747 14:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:41.747 rmmod nvme_tcp 00:06:42.007 rmmod nvme_fabrics 00:06:42.007 rmmod nvme_keyring 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1308751 ']' 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1308751 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1308751 ']' 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1308751 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1308751 00:06:42.007 14:16:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1308751' 00:06:42.007 killing process with pid 1308751 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1308751 00:06:42.007 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1308751 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.267 14:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.176 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:44.176 00:06:44.176 real 0m21.893s 00:06:44.176 user 1m2.855s 00:06:44.176 sys 0m7.567s 00:06:44.176 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.176 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.176 ************************************ 00:06:44.176 END TEST nvmf_lvol 00:06:44.176 ************************************ 00:06:44.176 14:16:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:44.176 14:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.176 14:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.176 14:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.436 ************************************ 00:06:44.436 START TEST nvmf_lvs_grow 00:06:44.436 ************************************ 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:44.437 * Looking for test storage... 
00:06:44.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.437 --rc genhtml_branch_coverage=1 00:06:44.437 --rc genhtml_function_coverage=1 00:06:44.437 --rc genhtml_legend=1 00:06:44.437 --rc geninfo_all_blocks=1 00:06:44.437 --rc geninfo_unexecuted_blocks=1 00:06:44.437 00:06:44.437 ' 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.437 --rc genhtml_branch_coverage=1 00:06:44.437 --rc genhtml_function_coverage=1 00:06:44.437 --rc genhtml_legend=1 00:06:44.437 --rc geninfo_all_blocks=1 00:06:44.437 --rc geninfo_unexecuted_blocks=1 00:06:44.437 00:06:44.437 ' 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.437 --rc genhtml_branch_coverage=1 00:06:44.437 --rc genhtml_function_coverage=1 00:06:44.437 --rc genhtml_legend=1 00:06:44.437 --rc geninfo_all_blocks=1 00:06:44.437 --rc geninfo_unexecuted_blocks=1 00:06:44.437 00:06:44.437 ' 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.437 --rc genhtml_branch_coverage=1 00:06:44.437 --rc genhtml_function_coverage=1 00:06:44.437 --rc genhtml_legend=1 00:06:44.437 --rc geninfo_all_blocks=1 00:06:44.437 --rc geninfo_unexecuted_blocks=1 00:06:44.437 00:06:44.437 ' 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:44.437 14:16:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.437 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:44.438 14:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.017 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:51.018 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:51.018 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.018 14:16:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:51.018 Found net devices under 0000:86:00.0: cvl_0_0 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:51.018 Found net devices under 0000:86:00.1: cvl_0_1 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.018 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:51.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:06:51.019 00:06:51.019 --- 10.0.0.2 ping statistics --- 00:06:51.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.019 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:51.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:06:51.019 00:06:51.019 --- 10.0.0.1 ping statistics --- 00:06:51.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.019 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1314632 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1314632 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1314632 ']' 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.019 [2024-11-17 14:16:39.720689] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
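At this point in the trace, nvmftestinit has finished wiring up the test network: one port of the E810 NIC (cvl_0_0) was moved into a private namespace to act as the target, the peer port (cvl_0_1) stayed in the root namespace as the initiator, a ping in each direction confirmed 10.0.0.1 <-> 10.0.0.2 connectivity, and nvmf_tgt has just been started inside the namespace (pid 1314632). A minimal sketch of the equivalent manual setup, reusing the interface names and addresses from the run above (the iptables comment tag the harness adds is dropped for brevity):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &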
00:06:51.019 [2024-11-17 14:16:39.720739] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.019 [2024-11-17 14:16:39.800084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.019 [2024-11-17 14:16:39.841607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.019 [2024-11-17 14:16:39.841642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.019 [2024-11-17 14:16:39.841650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.019 [2024-11-17 14:16:39.841655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.019 [2024-11-17 14:16:39.841660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.019 [2024-11-17 14:16:39.842206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.019 14:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:51.019 [2024-11-17 14:16:40.150264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.019 ************************************ 00:06:51.019 START TEST lvs_grow_clean 00:06:51.019 ************************************ 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:51.019 14:16:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:51.019 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:51.279 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:51.279 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:51.538 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:06:51.538 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:06:51.538 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:51.798 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:51.798 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:51.798 14:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 lvol 150 00:06:52.057 14:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=28867540-5f45-4dd3-9296-888fd7ee4c49 00:06:52.057 14:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:52.057 14:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:52.057 [2024-11-17 14:16:41.212133] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:52.057 [2024-11-17 14:16:41.212190] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:52.057 true 00:06:52.057 14:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:06:52.057 14:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:52.316 14:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:52.316 14:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:52.575 14:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 28867540-5f45-4dd3-9296-888fd7ee4c49 00:06:52.835 14:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:52.835 [2024-11-17 14:16:41.962384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.835 14:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:53.094 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:53.094 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1315052 00:06:53.094 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:53.094 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1315052 /var/tmp/bdevperf.sock 00:06:53.094 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1315052 ']' 00:06:53.094 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:53.094 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.094 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:53.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:53.094 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.094 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:53.094 [2024-11-17 14:16:42.207588] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
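The lvs_grow_clean setup above reduces to a short RPC sequence: back a logical-volume store with a 200 MiB AIO file (49 x 4 MiB data clusters), carve a 150 MiB lvol from it, grow the file to 400 MiB and rescan the AIO bdev, then export the lvol over NVMe/TCP; later in the run the lvstore itself is grown to 99 clusters while I/O is in flight (the bdev_lvol_grow_lvstore call below). A condensed sketch using the same rpc.py calls recorded in the trace, run from the SPDK checkout and with the UUIDs this particular run happened to generate:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    truncate -s 200M test/nvmf/target/aio_bdev
    ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    ./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs        # total_data_clusters == 49
    ./scripts/rpc.py bdev_lvol_create -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 lvol 150
    truncate -s 400M test/nvmf/target/aio_bdev
    ./scripts/rpc.py bdev_aio_rescan aio_bdev                # 51200 -> 102400 blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 \
        28867540-5f45-4dd3-9296-888fd7ee4c49
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_lvol_grow_lvstore -u dad40c17-d59f-4a62-a27b-a4ad1821dff0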
00:06:53.094 [2024-11-17 14:16:42.207636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1315052 ] 00:06:53.094 [2024-11-17 14:16:42.283827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.354 [2024-11-17 14:16:42.326513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.354 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.354 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:53.354 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:53.613 Nvme0n1 00:06:53.613 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:53.872 [ 00:06:53.872 { 00:06:53.872 "name": "Nvme0n1", 00:06:53.872 "aliases": [ 00:06:53.872 "28867540-5f45-4dd3-9296-888fd7ee4c49" 00:06:53.872 ], 00:06:53.872 "product_name": "NVMe disk", 00:06:53.872 "block_size": 4096, 00:06:53.872 "num_blocks": 38912, 00:06:53.872 "uuid": "28867540-5f45-4dd3-9296-888fd7ee4c49", 00:06:53.872 "numa_id": 1, 00:06:53.872 "assigned_rate_limits": { 00:06:53.872 "rw_ios_per_sec": 0, 00:06:53.872 "rw_mbytes_per_sec": 0, 00:06:53.872 "r_mbytes_per_sec": 0, 00:06:53.872 "w_mbytes_per_sec": 0 00:06:53.872 }, 00:06:53.872 "claimed": false, 00:06:53.872 "zoned": false, 00:06:53.872 "supported_io_types": { 00:06:53.872 "read": true, 00:06:53.872 "write": true, 00:06:53.872 "unmap": true, 00:06:53.872 "flush": true, 00:06:53.872 "reset": true, 00:06:53.872 "nvme_admin": true, 00:06:53.872 "nvme_io": true, 00:06:53.872 "nvme_io_md": false, 00:06:53.872 "write_zeroes": true, 00:06:53.872 "zcopy": false, 00:06:53.872 "get_zone_info": false, 00:06:53.872 "zone_management": false, 00:06:53.872 "zone_append": false, 00:06:53.872 "compare": true, 00:06:53.872 "compare_and_write": true, 00:06:53.872 "abort": true, 00:06:53.872 "seek_hole": false, 00:06:53.872 "seek_data": false, 00:06:53.873 "copy": true, 00:06:53.873 "nvme_iov_md": false 00:06:53.873 }, 00:06:53.873 "memory_domains": [ 00:06:53.873 { 00:06:53.873 "dma_device_id": "system", 00:06:53.873 "dma_device_type": 1 00:06:53.873 } 00:06:53.873 ], 00:06:53.873 "driver_specific": { 00:06:53.873 "nvme": [ 00:06:53.873 { 00:06:53.873 "trid": { 00:06:53.873 "trtype": "TCP", 00:06:53.873 "adrfam": "IPv4", 00:06:53.873 "traddr": "10.0.0.2", 00:06:53.873 "trsvcid": "4420", 00:06:53.873 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:53.873 }, 00:06:53.873 "ctrlr_data": { 00:06:53.873 "cntlid": 1, 00:06:53.873 "vendor_id": "0x8086", 00:06:53.873 "model_number": "SPDK bdev Controller", 00:06:53.873 "serial_number": "SPDK0", 00:06:53.873 "firmware_revision": "25.01", 00:06:53.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:53.873 "oacs": { 00:06:53.873 "security": 0, 00:06:53.873 "format": 0, 00:06:53.873 "firmware": 0, 00:06:53.873 "ns_manage": 0 00:06:53.873 }, 00:06:53.873 "multi_ctrlr": true, 00:06:53.873 
"ana_reporting": false 00:06:53.873 }, 00:06:53.873 "vs": { 00:06:53.873 "nvme_version": "1.3" 00:06:53.873 }, 00:06:53.873 "ns_data": { 00:06:53.873 "id": 1, 00:06:53.873 "can_share": true 00:06:53.873 } 00:06:53.873 } 00:06:53.873 ], 00:06:53.873 "mp_policy": "active_passive" 00:06:53.873 } 00:06:53.873 } 00:06:53.873 ] 00:06:53.873 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1315142 00:06:53.873 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:53.873 14:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:53.873 Running I/O for 10 seconds... 00:06:54.811 Latency(us) 00:06:54.811 [2024-11-17T13:16:44.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.811 Nvme0n1 : 1.00 22714.00 88.73 0.00 0.00 0.00 0.00 0.00 00:06:54.811 [2024-11-17T13:16:44.036Z] =================================================================================================================== 00:06:54.811 [2024-11-17T13:16:44.036Z] Total : 22714.00 88.73 0.00 0.00 0.00 0.00 0.00 00:06:54.811 00:06:55.749 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:06:56.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.008 Nvme0n1 : 2.00 22804.50 89.08 0.00 0.00 0.00 0.00 0.00 00:06:56.008 [2024-11-17T13:16:45.233Z] =================================================================================================================== 00:06:56.008 [2024-11-17T13:16:45.233Z] Total : 22804.50 89.08 0.00 0.00 0.00 0.00 0.00 00:06:56.008 00:06:56.008 true 00:06:56.008 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:06:56.008 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:56.268 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:56.268 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:56.268 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1315142 00:06:56.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.836 Nvme0n1 : 3.00 22814.33 89.12 0.00 0.00 0.00 0.00 0.00 00:06:56.836 [2024-11-17T13:16:46.061Z] =================================================================================================================== 00:06:56.836 [2024-11-17T13:16:46.061Z] Total : 22814.33 89.12 0.00 0.00 0.00 0.00 0.00 00:06:56.836 00:06:58.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.215 Nvme0n1 : 4.00 22895.25 89.43 0.00 0.00 0.00 0.00 0.00 00:06:58.215 [2024-11-17T13:16:47.440Z] 
=================================================================================================================== 00:06:58.215 [2024-11-17T13:16:47.440Z] Total : 22895.25 89.43 0.00 0.00 0.00 0.00 0.00 00:06:58.215 00:06:59.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.153 Nvme0n1 : 5.00 22945.00 89.63 0.00 0.00 0.00 0.00 0.00 00:06:59.153 [2024-11-17T13:16:48.378Z] =================================================================================================================== 00:06:59.153 [2024-11-17T13:16:48.378Z] Total : 22945.00 89.63 0.00 0.00 0.00 0.00 0.00 00:06:59.153 00:07:00.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.090 Nvme0n1 : 6.00 22979.83 89.76 0.00 0.00 0.00 0.00 0.00 00:07:00.090 [2024-11-17T13:16:49.315Z] =================================================================================================================== 00:07:00.090 [2024-11-17T13:16:49.315Z] Total : 22979.83 89.76 0.00 0.00 0.00 0.00 0.00 00:07:00.090 00:07:01.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.029 Nvme0n1 : 7.00 23004.29 89.86 0.00 0.00 0.00 0.00 0.00 00:07:01.029 [2024-11-17T13:16:50.254Z] =================================================================================================================== 00:07:01.029 [2024-11-17T13:16:50.254Z] Total : 23004.29 89.86 0.00 0.00 0.00 0.00 0.00 00:07:01.029 00:07:01.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.967 Nvme0n1 : 8.00 22973.38 89.74 0.00 0.00 0.00 0.00 0.00 00:07:01.967 [2024-11-17T13:16:51.192Z] =================================================================================================================== 00:07:01.967 [2024-11-17T13:16:51.192Z] Total : 22973.38 89.74 0.00 0.00 0.00 0.00 0.00 00:07:01.967 00:07:02.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.905 Nvme0n1 : 9.00 22999.89 89.84 0.00 0.00 0.00 0.00 0.00 00:07:02.905 [2024-11-17T13:16:52.130Z] =================================================================================================================== 00:07:02.905 [2024-11-17T13:16:52.130Z] Total : 22999.89 89.84 0.00 0.00 0.00 0.00 0.00 00:07:02.905 00:07:03.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.843 Nvme0n1 : 10.00 23016.00 89.91 0.00 0.00 0.00 0.00 0.00 00:07:03.843 [2024-11-17T13:16:53.068Z] =================================================================================================================== 00:07:03.843 [2024-11-17T13:16:53.068Z] Total : 23016.00 89.91 0.00 0.00 0.00 0.00 0.00 00:07:03.843 00:07:03.843 00:07:03.843 Latency(us) 00:07:03.843 [2024-11-17T13:16:53.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.843 Nvme0n1 : 10.01 23016.21 89.91 0.00 0.00 5557.85 1438.94 10143.83 00:07:03.843 [2024-11-17T13:16:53.068Z] =================================================================================================================== 00:07:03.843 [2024-11-17T13:16:53.068Z] Total : 23016.21 89.91 0.00 0.00 5557.85 1438.94 10143.83 00:07:03.843 { 00:07:03.843 "results": [ 00:07:03.843 { 00:07:03.843 "job": "Nvme0n1", 00:07:03.844 "core_mask": "0x2", 00:07:03.844 "workload": "randwrite", 00:07:03.844 "status": "finished", 00:07:03.844 "queue_depth": 128, 00:07:03.844 "io_size": 4096, 00:07:03.844 
"runtime": 10.005468, 00:07:03.844 "iops": 23016.214733783567, 00:07:03.844 "mibps": 89.90708880384206, 00:07:03.844 "io_failed": 0, 00:07:03.844 "io_timeout": 0, 00:07:03.844 "avg_latency_us": 5557.851027824516, 00:07:03.844 "min_latency_us": 1438.942608695652, 00:07:03.844 "max_latency_us": 10143.83304347826 00:07:03.844 } 00:07:03.844 ], 00:07:03.844 "core_count": 1 00:07:03.844 } 00:07:03.844 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1315052 00:07:03.844 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1315052 ']' 00:07:03.844 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1315052 00:07:03.844 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:03.844 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.844 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1315052 00:07:04.104 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:04.104 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:04.104 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1315052' 00:07:04.104 killing process with pid 1315052 00:07:04.104 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1315052 00:07:04.104 Received shutdown signal, test time was about 10.000000 seconds 00:07:04.104 00:07:04.104 Latency(us) 00:07:04.104 [2024-11-17T13:16:53.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.104 [2024-11-17T13:16:53.329Z] =================================================================================================================== 00:07:04.104 [2024-11-17T13:16:53.329Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:04.104 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1315052 00:07:04.104 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:04.364 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:04.623 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:07:04.623 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:04.882 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:04.882 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:04.882 14:16:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:04.882 [2024-11-17 14:16:54.031022] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:04.882 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:07:05.142 request: 00:07:05.142 { 00:07:05.142 "uuid": "dad40c17-d59f-4a62-a27b-a4ad1821dff0", 00:07:05.142 "method": "bdev_lvol_get_lvstores", 00:07:05.142 "req_id": 1 00:07:05.142 } 00:07:05.142 Got JSON-RPC error response 00:07:05.142 response: 00:07:05.142 { 00:07:05.142 "code": -19, 00:07:05.142 "message": "No such device" 00:07:05.142 } 00:07:05.142 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:05.142 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:05.142 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:05.142 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:05.142 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:05.404 aio_bdev 00:07:05.404 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 28867540-5f45-4dd3-9296-888fd7ee4c49 00:07:05.404 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=28867540-5f45-4dd3-9296-888fd7ee4c49 00:07:05.404 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:05.404 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:05.404 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:05.404 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:05.404 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:05.404 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 28867540-5f45-4dd3-9296-888fd7ee4c49 -t 2000 00:07:05.663 [ 00:07:05.663 { 00:07:05.663 "name": "28867540-5f45-4dd3-9296-888fd7ee4c49", 00:07:05.663 "aliases": [ 00:07:05.663 "lvs/lvol" 00:07:05.663 ], 00:07:05.663 "product_name": "Logical Volume", 00:07:05.663 "block_size": 4096, 00:07:05.663 "num_blocks": 38912, 00:07:05.663 "uuid": "28867540-5f45-4dd3-9296-888fd7ee4c49", 00:07:05.663 "assigned_rate_limits": { 00:07:05.663 "rw_ios_per_sec": 0, 00:07:05.663 "rw_mbytes_per_sec": 0, 00:07:05.663 "r_mbytes_per_sec": 0, 00:07:05.663 "w_mbytes_per_sec": 0 00:07:05.663 }, 00:07:05.663 "claimed": false, 00:07:05.663 "zoned": false, 00:07:05.663 "supported_io_types": { 00:07:05.663 "read": true, 00:07:05.663 "write": true, 00:07:05.663 "unmap": true, 00:07:05.663 "flush": false, 00:07:05.663 "reset": true, 00:07:05.663 "nvme_admin": false, 00:07:05.663 "nvme_io": false, 00:07:05.663 "nvme_io_md": false, 00:07:05.663 "write_zeroes": true, 00:07:05.663 "zcopy": false, 00:07:05.663 "get_zone_info": false, 00:07:05.663 "zone_management": false, 00:07:05.663 "zone_append": false, 00:07:05.663 "compare": false, 00:07:05.663 "compare_and_write": false, 00:07:05.663 "abort": false, 00:07:05.663 "seek_hole": true, 00:07:05.663 "seek_data": true, 00:07:05.663 "copy": false, 00:07:05.663 "nvme_iov_md": false 00:07:05.663 }, 00:07:05.663 "driver_specific": { 00:07:05.663 "lvol": { 00:07:05.663 "lvol_store_uuid": "dad40c17-d59f-4a62-a27b-a4ad1821dff0", 00:07:05.663 "base_bdev": "aio_bdev", 00:07:05.663 "thin_provision": false, 00:07:05.663 "num_allocated_clusters": 38, 00:07:05.663 "snapshot": false, 00:07:05.663 "clone": false, 00:07:05.663 "esnap_clone": false 00:07:05.663 } 00:07:05.663 } 00:07:05.663 } 00:07:05.663 ] 00:07:05.663 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:05.663 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:07:05.663 
14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:05.922 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:05.922 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:07:05.922 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:06.182 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:06.182 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 28867540-5f45-4dd3-9296-888fd7ee4c49 00:07:06.441 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dad40c17-d59f-4a62-a27b-a4ad1821dff0 00:07:06.441 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.700 00:07:06.700 real 0m15.627s 00:07:06.700 user 0m15.091s 00:07:06.700 sys 0m1.556s 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:06.700 ************************************ 00:07:06.700 END TEST lvs_grow_clean 00:07:06.700 ************************************ 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.700 ************************************ 00:07:06.700 START TEST lvs_grow_dirty 00:07:06.700 ************************************ 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.700 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.960 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:06.960 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:06.960 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:07.219 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:07.219 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:07.219 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:07.479 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:07.479 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:07.479 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0284ef23-3cf2-45ae-a29d-5459de44e285 lvol 150 00:07:07.738 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6fa751ad-836f-4a4c-897d-7c14c8188581 00:07:07.738 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.738 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:07.738 [2024-11-17 14:16:56.898898] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:07.738 [2024-11-17 14:16:56.898950] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:07.738 true 00:07:07.738 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:07.738 14:16:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:07.997 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:07.997 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:08.256 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6fa751ad-836f-4a4c-897d-7c14c8188581 00:07:08.516 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:08.516 [2024-11-17 14:16:57.661175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.516 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:08.775 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:08.775 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1317737 00:07:08.775 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:08.775 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1317737 /var/tmp/bdevperf.sock 00:07:08.775 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1317737 ']' 00:07:08.775 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:08.775 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.775 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:08.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:08.775 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.775 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:08.775 [2024-11-17 14:16:57.880329] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
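Up to this point the dirty variant has built the same stack as the clean one: a 200M file-backed AIO bdev, an lvstore on top of it, and a 150M lvol, after which the backing file is grown to 400M and the AIO bdev rescanned while the lvstore deliberately still reports 49 clusters. The lvol is then exported over NVMe/TCP and bdevperf is attached as the initiator. A minimal sketch of that RPC flow, assuming rpc.py is on PATH (the log invokes it by absolute workspace path) and with <lvs_uuid>/<lvol_uuid> standing in for the UUIDs printed above:

  # Build the backing stack: file -> AIO bdev -> lvstore -> 150M lvol
  truncate -s 200M ./aio_bdev_file
  rpc.py bdev_aio_create ./aio_bdev_file aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150

  # Grow only the backing file for now; the lvstore itself is grown later, mid-I/O
  truncate -s 400M ./aio_bdev_file
  rpc.py bdev_aio_rescan aio_bdev

  # Export the lvol over NVMe/TCP (NQN, serial, and address as seen in the log)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420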
00:07:08.775 [2024-11-17 14:16:57.880378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1317737 ] 00:07:08.775 [2024-11-17 14:16:57.953962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.775 [2024-11-17 14:16:57.994392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.034 14:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.034 14:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:09.035 14:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:09.293 Nvme0n1 00:07:09.294 14:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:09.553 [ 00:07:09.553 { 00:07:09.553 "name": "Nvme0n1", 00:07:09.553 "aliases": [ 00:07:09.553 "6fa751ad-836f-4a4c-897d-7c14c8188581" 00:07:09.553 ], 00:07:09.553 "product_name": "NVMe disk", 00:07:09.553 "block_size": 4096, 00:07:09.553 "num_blocks": 38912, 00:07:09.553 "uuid": "6fa751ad-836f-4a4c-897d-7c14c8188581", 00:07:09.553 "numa_id": 1, 00:07:09.553 "assigned_rate_limits": { 00:07:09.553 "rw_ios_per_sec": 0, 00:07:09.553 "rw_mbytes_per_sec": 0, 00:07:09.553 "r_mbytes_per_sec": 0, 00:07:09.553 "w_mbytes_per_sec": 0 00:07:09.553 }, 00:07:09.553 "claimed": false, 00:07:09.553 "zoned": false, 00:07:09.553 "supported_io_types": { 00:07:09.553 "read": true, 00:07:09.553 "write": true, 00:07:09.553 "unmap": true, 00:07:09.553 "flush": true, 00:07:09.553 "reset": true, 00:07:09.553 "nvme_admin": true, 00:07:09.553 "nvme_io": true, 00:07:09.553 "nvme_io_md": false, 00:07:09.553 "write_zeroes": true, 00:07:09.553 "zcopy": false, 00:07:09.553 "get_zone_info": false, 00:07:09.553 "zone_management": false, 00:07:09.553 "zone_append": false, 00:07:09.553 "compare": true, 00:07:09.553 "compare_and_write": true, 00:07:09.553 "abort": true, 00:07:09.553 "seek_hole": false, 00:07:09.553 "seek_data": false, 00:07:09.553 "copy": true, 00:07:09.553 "nvme_iov_md": false 00:07:09.553 }, 00:07:09.553 "memory_domains": [ 00:07:09.553 { 00:07:09.553 "dma_device_id": "system", 00:07:09.553 "dma_device_type": 1 00:07:09.553 } 00:07:09.553 ], 00:07:09.553 "driver_specific": { 00:07:09.553 "nvme": [ 00:07:09.553 { 00:07:09.553 "trid": { 00:07:09.553 "trtype": "TCP", 00:07:09.553 "adrfam": "IPv4", 00:07:09.553 "traddr": "10.0.0.2", 00:07:09.553 "trsvcid": "4420", 00:07:09.553 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:09.553 }, 00:07:09.553 "ctrlr_data": { 00:07:09.553 "cntlid": 1, 00:07:09.553 "vendor_id": "0x8086", 00:07:09.553 "model_number": "SPDK bdev Controller", 00:07:09.553 "serial_number": "SPDK0", 00:07:09.553 "firmware_revision": "25.01", 00:07:09.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.553 "oacs": { 00:07:09.553 "security": 0, 00:07:09.553 "format": 0, 00:07:09.553 "firmware": 0, 00:07:09.553 "ns_manage": 0 00:07:09.553 }, 00:07:09.553 "multi_ctrlr": true, 00:07:09.553 
"ana_reporting": false 00:07:09.553 }, 00:07:09.553 "vs": { 00:07:09.553 "nvme_version": "1.3" 00:07:09.553 }, 00:07:09.553 "ns_data": { 00:07:09.553 "id": 1, 00:07:09.553 "can_share": true 00:07:09.553 } 00:07:09.553 } 00:07:09.553 ], 00:07:09.553 "mp_policy": "active_passive" 00:07:09.553 } 00:07:09.553 } 00:07:09.553 ] 00:07:09.553 14:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1317761 00:07:09.553 14:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:09.553 14:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:09.553 Running I/O for 10 seconds... 00:07:10.496 Latency(us) 00:07:10.496 [2024-11-17T13:16:59.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.496 Nvme0n1 : 1.00 22805.00 89.08 0.00 0.00 0.00 0.00 0.00 00:07:10.496 [2024-11-17T13:16:59.721Z] =================================================================================================================== 00:07:10.496 [2024-11-17T13:16:59.721Z] Total : 22805.00 89.08 0.00 0.00 0.00 0.00 0.00 00:07:10.496 00:07:11.433 14:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:11.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.692 Nvme0n1 : 2.00 22707.00 88.70 0.00 0.00 0.00 0.00 0.00 00:07:11.692 [2024-11-17T13:17:00.917Z] =================================================================================================================== 00:07:11.692 [2024-11-17T13:17:00.917Z] Total : 22707.00 88.70 0.00 0.00 0.00 0.00 0.00 00:07:11.692 00:07:11.692 true 00:07:11.692 14:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:11.692 14:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:11.952 14:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:11.952 14:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:11.952 14:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1317761 00:07:12.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.521 Nvme0n1 : 3.00 22762.00 88.91 0.00 0.00 0.00 0.00 0.00 00:07:12.521 [2024-11-17T13:17:01.746Z] =================================================================================================================== 00:07:12.521 [2024-11-17T13:17:01.746Z] Total : 22762.00 88.91 0.00 0.00 0.00 0.00 0.00 00:07:12.521 00:07:13.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.900 Nvme0n1 : 4.00 22850.75 89.26 0.00 0.00 0.00 0.00 0.00 00:07:13.900 [2024-11-17T13:17:03.125Z] 
=================================================================================================================== 00:07:13.900 [2024-11-17T13:17:03.125Z] Total : 22850.75 89.26 0.00 0.00 0.00 0.00 0.00 00:07:13.900 00:07:14.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.836 Nvme0n1 : 5.00 22906.60 89.48 0.00 0.00 0.00 0.00 0.00 00:07:14.836 [2024-11-17T13:17:04.061Z] =================================================================================================================== 00:07:14.836 [2024-11-17T13:17:04.061Z] Total : 22906.60 89.48 0.00 0.00 0.00 0.00 0.00 00:07:14.836 00:07:15.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.773 Nvme0n1 : 6.00 22956.00 89.67 0.00 0.00 0.00 0.00 0.00 00:07:15.773 [2024-11-17T13:17:04.998Z] =================================================================================================================== 00:07:15.773 [2024-11-17T13:17:04.998Z] Total : 22956.00 89.67 0.00 0.00 0.00 0.00 0.00 00:07:15.773 00:07:16.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.710 Nvme0n1 : 7.00 22989.29 89.80 0.00 0.00 0.00 0.00 0.00 00:07:16.710 [2024-11-17T13:17:05.935Z] =================================================================================================================== 00:07:16.710 [2024-11-17T13:17:05.935Z] Total : 22989.29 89.80 0.00 0.00 0.00 0.00 0.00 00:07:16.710 00:07:17.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.647 Nvme0n1 : 8.00 23016.50 89.91 0.00 0.00 0.00 0.00 0.00 00:07:17.647 [2024-11-17T13:17:06.872Z] =================================================================================================================== 00:07:17.647 [2024-11-17T13:17:06.872Z] Total : 23016.50 89.91 0.00 0.00 0.00 0.00 0.00 00:07:17.647 00:07:18.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.585 Nvme0n1 : 9.00 23029.33 89.96 0.00 0.00 0.00 0.00 0.00 00:07:18.585 [2024-11-17T13:17:07.810Z] =================================================================================================================== 00:07:18.585 [2024-11-17T13:17:07.810Z] Total : 23029.33 89.96 0.00 0.00 0.00 0.00 0.00 00:07:18.585 00:07:19.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.524 Nvme0n1 : 10.00 23045.10 90.02 0.00 0.00 0.00 0.00 0.00 00:07:19.524 [2024-11-17T13:17:08.749Z] =================================================================================================================== 00:07:19.524 [2024-11-17T13:17:08.749Z] Total : 23045.10 90.02 0.00 0.00 0.00 0.00 0.00 00:07:19.524 00:07:19.524 00:07:19.524 Latency(us) 00:07:19.524 [2024-11-17T13:17:08.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.524 Nvme0n1 : 10.00 23050.16 90.04 0.00 0.00 5550.13 2322.25 10599.74 00:07:19.524 [2024-11-17T13:17:08.749Z] =================================================================================================================== 00:07:19.524 [2024-11-17T13:17:08.749Z] Total : 23050.16 90.04 0.00 0.00 5550.13 2322.25 10599.74 00:07:19.524 { 00:07:19.524 "results": [ 00:07:19.524 { 00:07:19.524 "job": "Nvme0n1", 00:07:19.524 "core_mask": "0x2", 00:07:19.524 "workload": "randwrite", 00:07:19.524 "status": "finished", 00:07:19.524 "queue_depth": 128, 00:07:19.524 "io_size": 4096, 00:07:19.524 
"runtime": 10.003357, 00:07:19.524 "iops": 23050.162060596256, 00:07:19.524 "mibps": 90.03969554920413, 00:07:19.524 "io_failed": 0, 00:07:19.524 "io_timeout": 0, 00:07:19.524 "avg_latency_us": 5550.130221851719, 00:07:19.524 "min_latency_us": 2322.2539130434784, 00:07:19.524 "max_latency_us": 10599.735652173913 00:07:19.524 } 00:07:19.524 ], 00:07:19.524 "core_count": 1 00:07:19.524 } 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1317737 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1317737 ']' 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1317737 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1317737 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1317737' 00:07:19.785 killing process with pid 1317737 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1317737 00:07:19.785 Received shutdown signal, test time was about 10.000000 seconds 00:07:19.785 00:07:19.785 Latency(us) 00:07:19.785 [2024-11-17T13:17:09.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.785 [2024-11-17T13:17:09.010Z] =================================================================================================================== 00:07:19.785 [2024-11-17T13:17:09.010Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1317737 00:07:19.785 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.044 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:20.304 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:20.304 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:20.564 14:17:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1314632 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1314632 00:07:20.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1314632 Killed "${NVMF_APP[@]}" "$@" 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1319599 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1319599 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1319599 ']' 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.564 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:20.564 [2024-11-17 14:17:09.664870] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:20.564 [2024-11-17 14:17:09.664919] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.564 [2024-11-17 14:17:09.745403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.824 [2024-11-17 14:17:09.787561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.824 [2024-11-17 14:17:09.787596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.824 [2024-11-17 14:17:09.787603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.824 [2024-11-17 14:17:09.787610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
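Because the previous target was killed with SIGKILL while the lvstore was open, its metadata was never cleanly committed to the backing file; that is exactly the dirty condition this variant exercises. The freshly started target has to rediscover the stack, and re-creating the AIO bdev triggers blobstore recovery (the "Performing recovery on blobstore" notices that follow), after which the recovered lvol reappears and its cluster counts can be checked. Roughly, with hypothetical <lvs_uuid>/<lvol_uuid> placeholders:

  # Re-create the AIO bdev on the same backing file; blobstore recovery
  # replays the dirty metadata and re-registers the lvstore and lvol
  rpc.py bdev_aio_create ./aio_bdev_file aio_bdev 4096
  rpc.py bdev_wait_for_examine
  rpc.py bdev_get_bdevs -b <lvol_uuid> -t 2000    # wait for the recovered lvol
  rpc.py bdev_lvol_get_lvstores -u <lvs_uuid>     # verify free/total cluster counts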
00:07:20.824 [2024-11-17 14:17:09.787615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:20.824 [2024-11-17 14:17:09.788190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.824 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.824 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:20.824 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:20.824 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:20.824 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:20.824 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.824 14:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.083 [2024-11-17 14:17:10.114238] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:21.083 [2024-11-17 14:17:10.114326] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:21.083 [2024-11-17 14:17:10.114359] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:21.083 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:21.083 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6fa751ad-836f-4a4c-897d-7c14c8188581 00:07:21.083 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6fa751ad-836f-4a4c-897d-7c14c8188581 00:07:21.083 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:21.083 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:21.083 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:21.083 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:21.083 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:21.343 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6fa751ad-836f-4a4c-897d-7c14c8188581 -t 2000 00:07:21.343 [ 00:07:21.343 { 00:07:21.343 "name": "6fa751ad-836f-4a4c-897d-7c14c8188581", 00:07:21.343 "aliases": [ 00:07:21.343 "lvs/lvol" 00:07:21.343 ], 00:07:21.343 "product_name": "Logical Volume", 00:07:21.343 "block_size": 4096, 00:07:21.343 "num_blocks": 38912, 00:07:21.343 "uuid": "6fa751ad-836f-4a4c-897d-7c14c8188581", 00:07:21.343 "assigned_rate_limits": { 00:07:21.343 "rw_ios_per_sec": 0, 00:07:21.343 "rw_mbytes_per_sec": 0, 
00:07:21.343 "r_mbytes_per_sec": 0, 00:07:21.343 "w_mbytes_per_sec": 0 00:07:21.343 }, 00:07:21.343 "claimed": false, 00:07:21.343 "zoned": false, 00:07:21.343 "supported_io_types": { 00:07:21.343 "read": true, 00:07:21.343 "write": true, 00:07:21.343 "unmap": true, 00:07:21.343 "flush": false, 00:07:21.343 "reset": true, 00:07:21.343 "nvme_admin": false, 00:07:21.343 "nvme_io": false, 00:07:21.343 "nvme_io_md": false, 00:07:21.343 "write_zeroes": true, 00:07:21.343 "zcopy": false, 00:07:21.343 "get_zone_info": false, 00:07:21.343 "zone_management": false, 00:07:21.343 "zone_append": false, 00:07:21.343 "compare": false, 00:07:21.343 "compare_and_write": false, 00:07:21.343 "abort": false, 00:07:21.343 "seek_hole": true, 00:07:21.343 "seek_data": true, 00:07:21.343 "copy": false, 00:07:21.343 "nvme_iov_md": false 00:07:21.343 }, 00:07:21.343 "driver_specific": { 00:07:21.343 "lvol": { 00:07:21.343 "lvol_store_uuid": "0284ef23-3cf2-45ae-a29d-5459de44e285", 00:07:21.343 "base_bdev": "aio_bdev", 00:07:21.343 "thin_provision": false, 00:07:21.343 "num_allocated_clusters": 38, 00:07:21.343 "snapshot": false, 00:07:21.343 "clone": false, 00:07:21.343 "esnap_clone": false 00:07:21.343 } 00:07:21.343 } 00:07:21.343 } 00:07:21.343 ] 00:07:21.343 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:21.343 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:21.343 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:21.603 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:21.603 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:21.603 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:21.862 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:21.862 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:22.121 [2024-11-17 14:17:11.107108] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:22.121 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:22.121 request: 00:07:22.121 { 00:07:22.121 "uuid": "0284ef23-3cf2-45ae-a29d-5459de44e285", 00:07:22.121 "method": "bdev_lvol_get_lvstores", 00:07:22.121 "req_id": 1 00:07:22.121 } 00:07:22.121 Got JSON-RPC error response 00:07:22.121 response: 00:07:22.121 { 00:07:22.121 "code": -19, 00:07:22.121 "message": "No such device" 00:07:22.121 } 00:07:22.379 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:22.379 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.379 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.379 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.379 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.379 aio_bdev 00:07:22.379 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6fa751ad-836f-4a4c-897d-7c14c8188581 00:07:22.379 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6fa751ad-836f-4a4c-897d-7c14c8188581 00:07:22.379 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.379 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:22.379 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.379 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.379 14:17:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:22.637 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6fa751ad-836f-4a4c-897d-7c14c8188581 -t 2000 00:07:22.896 [ 00:07:22.896 { 00:07:22.896 "name": "6fa751ad-836f-4a4c-897d-7c14c8188581", 00:07:22.896 "aliases": [ 00:07:22.896 "lvs/lvol" 00:07:22.896 ], 00:07:22.896 "product_name": "Logical Volume", 00:07:22.896 "block_size": 4096, 00:07:22.896 "num_blocks": 38912, 00:07:22.896 "uuid": "6fa751ad-836f-4a4c-897d-7c14c8188581", 00:07:22.896 "assigned_rate_limits": { 00:07:22.896 "rw_ios_per_sec": 0, 00:07:22.896 "rw_mbytes_per_sec": 0, 00:07:22.896 "r_mbytes_per_sec": 0, 00:07:22.896 "w_mbytes_per_sec": 0 00:07:22.896 }, 00:07:22.896 "claimed": false, 00:07:22.896 "zoned": false, 00:07:22.896 "supported_io_types": { 00:07:22.896 "read": true, 00:07:22.896 "write": true, 00:07:22.896 "unmap": true, 00:07:22.896 "flush": false, 00:07:22.896 "reset": true, 00:07:22.896 "nvme_admin": false, 00:07:22.896 "nvme_io": false, 00:07:22.896 "nvme_io_md": false, 00:07:22.896 "write_zeroes": true, 00:07:22.896 "zcopy": false, 00:07:22.896 "get_zone_info": false, 00:07:22.896 "zone_management": false, 00:07:22.896 "zone_append": false, 00:07:22.896 "compare": false, 00:07:22.896 "compare_and_write": false, 00:07:22.896 "abort": false, 00:07:22.896 "seek_hole": true, 00:07:22.896 "seek_data": true, 00:07:22.896 "copy": false, 00:07:22.896 "nvme_iov_md": false 00:07:22.896 }, 00:07:22.896 "driver_specific": { 00:07:22.896 "lvol": { 00:07:22.896 "lvol_store_uuid": "0284ef23-3cf2-45ae-a29d-5459de44e285", 00:07:22.896 "base_bdev": "aio_bdev", 00:07:22.896 "thin_provision": false, 00:07:22.896 "num_allocated_clusters": 38, 00:07:22.896 "snapshot": false, 00:07:22.896 "clone": false, 00:07:22.896 "esnap_clone": false 00:07:22.896 } 00:07:22.896 } 00:07:22.896 } 00:07:22.896 ] 00:07:22.896 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:22.896 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:22.896 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:23.156 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:23.156 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:23.156 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:23.156 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:23.156 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6fa751ad-836f-4a4c-897d-7c14c8188581 00:07:23.414 14:17:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0284ef23-3cf2-45ae-a29d-5459de44e285 00:07:23.672 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:23.932 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.932 00:07:23.932 real 0m17.029s 00:07:23.932 user 0m43.869s 00:07:23.932 sys 0m3.875s 00:07:23.932 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.932 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.932 ************************************ 00:07:23.932 END TEST lvs_grow_dirty 00:07:23.932 ************************************ 00:07:23.932 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:23.932 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:23.932 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:23.932 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:23.933 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:23.933 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:23.933 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:23.933 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:23.933 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:23.933 nvmf_trace.0 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.933 rmmod nvme_tcp 00:07:23.933 rmmod nvme_fabrics 00:07:23.933 rmmod nvme_keyring 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:23.933 
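With both sub-tests passed, teardown archives the trace shared-memory file into the build output and unwinds the host state; the rmmod lines above are the kernel NVMe initiator modules being released. A condensed sketch of that cleanup, with <nvmfpid> standing in for the target PID from the log and $output_dir for the artifacts directory:

  # Archive the trace buffer, unload kernel NVMe/TCP modules,
  # stop the target, and flush the test network interface
  tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill <nvmfpid> && wait <nvmfpid>
  ip -4 addr flush cvl_0_1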
14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1319599 ']' 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1319599 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1319599 ']' 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1319599 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1319599 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1319599' 00:07:23.933 killing process with pid 1319599 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1319599 00:07:23.933 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1319599 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.193 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:26.731 00:07:26.731 real 0m41.964s 00:07:26.731 user 1m4.743s 00:07:26.731 sys 0m10.378s 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.731 ************************************ 00:07:26.731 END TEST nvmf_lvs_grow 00:07:26.731 ************************************ 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.731 ************************************ 00:07:26.731 START TEST nvmf_bdev_io_wait 00:07:26.731 ************************************ 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:26.731 * Looking for test storage... 00:07:26.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:26.731 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:26.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.732 --rc genhtml_branch_coverage=1 00:07:26.732 --rc genhtml_function_coverage=1 00:07:26.732 --rc genhtml_legend=1 00:07:26.732 --rc geninfo_all_blocks=1 00:07:26.732 --rc geninfo_unexecuted_blocks=1 00:07:26.732 00:07:26.732 ' 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:26.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.732 --rc genhtml_branch_coverage=1 00:07:26.732 --rc genhtml_function_coverage=1 00:07:26.732 --rc genhtml_legend=1 00:07:26.732 --rc geninfo_all_blocks=1 00:07:26.732 --rc geninfo_unexecuted_blocks=1 00:07:26.732 00:07:26.732 ' 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:26.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.732 --rc genhtml_branch_coverage=1 00:07:26.732 --rc genhtml_function_coverage=1 00:07:26.732 --rc genhtml_legend=1 00:07:26.732 --rc geninfo_all_blocks=1 00:07:26.732 --rc geninfo_unexecuted_blocks=1 00:07:26.732 00:07:26.732 ' 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:26.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.732 --rc genhtml_branch_coverage=1 00:07:26.732 --rc genhtml_function_coverage=1 00:07:26.732 --rc genhtml_legend=1 00:07:26.732 --rc geninfo_all_blocks=1 00:07:26.732 --rc geninfo_unexecuted_blocks=1 00:07:26.732 00:07:26.732 ' 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.732 14:17:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:26.732 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:33.498 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:33.498 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.498 14:17:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:33.498 Found net devices under 0000:86:00.0: cvl_0_0 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.498 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:33.499 Found net devices under 0000:86:00.1: cvl_0_1 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:33.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:07:33.499 00:07:33.499 --- 10.0.0.2 ping statistics --- 00:07:33.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.499 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:33.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:07:33.499 00:07:33.499 --- 10.0.0.1 ping statistics --- 00:07:33.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.499 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1323883 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1323883 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1323883 ']' 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.499 [2024-11-17 14:17:21.708556] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
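The nvmf_tcp_init sequence above builds the back-to-back topology used for phy runs: one port of the ice/E810 pair (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk. A minimal sketch of that setup, using only the interface names and addresses visible in this run (the real helper in test/nvmf/common.sh also handles second IPs and machines with more ports):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check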
00:07:33.499 [2024-11-17 14:17:21.708604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.499 [2024-11-17 14:17:21.786548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.499 [2024-11-17 14:17:21.830828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.499 [2024-11-17 14:17:21.830866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.499 [2024-11-17 14:17:21.830874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.499 [2024-11-17 14:17:21.830880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.499 [2024-11-17 14:17:21.830885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.499 [2024-11-17 14:17:21.832337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.499 [2024-11-17 14:17:21.832451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.499 [2024-11-17 14:17:21.832485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.499 [2024-11-17 14:17:21.832486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:33.499 [2024-11-17 14:17:21.972390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.499 Malloc0 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.499 14:17:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.499 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.499 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:33.499 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.499 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.499 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.499 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.499 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.499 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.500 [2024-11-17 14:17:22.027670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1323910 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1323912 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:33.500 { 00:07:33.500 "params": { 
00:07:33.500 "name": "Nvme$subsystem", 00:07:33.500 "trtype": "$TEST_TRANSPORT", 00:07:33.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:33.500 "adrfam": "ipv4", 00:07:33.500 "trsvcid": "$NVMF_PORT", 00:07:33.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:33.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:33.500 "hdgst": ${hdgst:-false}, 00:07:33.500 "ddgst": ${ddgst:-false} 00:07:33.500 }, 00:07:33.500 "method": "bdev_nvme_attach_controller" 00:07:33.500 } 00:07:33.500 EOF 00:07:33.500 )") 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1323914 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:33.500 { 00:07:33.500 "params": { 00:07:33.500 "name": "Nvme$subsystem", 00:07:33.500 "trtype": "$TEST_TRANSPORT", 00:07:33.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:33.500 "adrfam": "ipv4", 00:07:33.500 "trsvcid": "$NVMF_PORT", 00:07:33.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:33.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:33.500 "hdgst": ${hdgst:-false}, 00:07:33.500 "ddgst": ${ddgst:-false} 00:07:33.500 }, 00:07:33.500 "method": "bdev_nvme_attach_controller" 00:07:33.500 } 00:07:33.500 EOF 00:07:33.500 )") 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1323917 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:33.500 { 00:07:33.500 "params": { 00:07:33.500 "name": "Nvme$subsystem", 00:07:33.500 "trtype": "$TEST_TRANSPORT", 00:07:33.500 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:07:33.500 "adrfam": "ipv4", 00:07:33.500 "trsvcid": "$NVMF_PORT", 00:07:33.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:33.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:33.500 "hdgst": ${hdgst:-false}, 00:07:33.500 "ddgst": ${ddgst:-false} 00:07:33.500 }, 00:07:33.500 "method": "bdev_nvme_attach_controller" 00:07:33.500 } 00:07:33.500 EOF 00:07:33.500 )") 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:33.500 { 00:07:33.500 "params": { 00:07:33.500 "name": "Nvme$subsystem", 00:07:33.500 "trtype": "$TEST_TRANSPORT", 00:07:33.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:33.500 "adrfam": "ipv4", 00:07:33.500 "trsvcid": "$NVMF_PORT", 00:07:33.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:33.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:33.500 "hdgst": ${hdgst:-false}, 00:07:33.500 "ddgst": ${ddgst:-false} 00:07:33.500 }, 00:07:33.500 "method": "bdev_nvme_attach_controller" 00:07:33.500 } 00:07:33.500 EOF 00:07:33.500 )") 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1323910 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:33.500 "params": { 00:07:33.500 "name": "Nvme1", 00:07:33.500 "trtype": "tcp", 00:07:33.500 "traddr": "10.0.0.2", 00:07:33.500 "adrfam": "ipv4", 00:07:33.500 "trsvcid": "4420", 00:07:33.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:33.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:33.500 "hdgst": false, 00:07:33.500 "ddgst": false 00:07:33.500 }, 00:07:33.500 "method": "bdev_nvme_attach_controller" 00:07:33.500 }' 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:33.500 "params": { 00:07:33.500 "name": "Nvme1", 00:07:33.500 "trtype": "tcp", 00:07:33.500 "traddr": "10.0.0.2", 00:07:33.500 "adrfam": "ipv4", 00:07:33.500 "trsvcid": "4420", 00:07:33.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:33.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:33.500 "hdgst": false, 00:07:33.500 "ddgst": false 00:07:33.500 }, 00:07:33.500 "method": "bdev_nvme_attach_controller" 00:07:33.500 }' 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:33.500 "params": { 00:07:33.500 "name": "Nvme1", 00:07:33.500 "trtype": "tcp", 00:07:33.500 "traddr": "10.0.0.2", 00:07:33.500 "adrfam": "ipv4", 00:07:33.500 "trsvcid": "4420", 00:07:33.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:33.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:33.500 "hdgst": false, 00:07:33.500 "ddgst": false 00:07:33.500 }, 00:07:33.500 "method": "bdev_nvme_attach_controller" 00:07:33.500 }' 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:33.500 14:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:33.500 "params": { 00:07:33.500 "name": "Nvme1", 00:07:33.500 "trtype": "tcp", 00:07:33.500 "traddr": "10.0.0.2", 00:07:33.500 "adrfam": "ipv4", 00:07:33.500 "trsvcid": "4420", 00:07:33.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:33.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:33.500 "hdgst": false, 00:07:33.500 "ddgst": false 00:07:33.500 }, 00:07:33.500 "method": "bdev_nvme_attach_controller" 00:07:33.500 }' 00:07:33.500 [2024-11-17 14:17:22.077588] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:33.500 [2024-11-17 14:17:22.077638] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:33.500 [2024-11-17 14:17:22.080780] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:33.500 [2024-11-17 14:17:22.080826] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:33.500 [2024-11-17 14:17:22.083586] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:33.500 [2024-11-17 14:17:22.083627] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:33.500 [2024-11-17 14:17:22.084990] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
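At this point the target side is fully staged for the io_wait test: nvmf_tgt was started with --wait-for-rpc so that bdev_set_options -p 5 -c 1 could shrink the bdev I/O pools before subsystem init, which is what forces the ENOMEM retry path once four bdevperf instances (write, read, flush, unmap at queue depth 128) share one Malloc-backed namespace. The four JSON fragments printed above are the gen_nvmf_target_json params each bdevperf reads over /dev/fd/63. The same bring-up, sketched as direct scripts/rpc.py calls (rpc_cmd in the harness forwards to that script over /var/tmp/spdk.sock; the relative RPC path is an assumption about the working directory):

  RPC=./scripts/rpc.py                                     # run from the spdk checkout
  $RPC bdev_set_options -p 5 -c 1                          # bdev_io pool size 5, per-thread cache 1
  $RPC framework_start_init                                # finish the init deferred by --wait-for-rpc
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                # 64 MB ramdisk with 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420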
00:07:33.500 [2024-11-17 14:17:22.085031] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:07:33.500 [2024-11-17 14:17:22.265484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.500 [2024-11-17 14:17:22.308639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:07:33.500 [2024-11-17 14:17:22.362427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.500 [2024-11-17 14:17:22.410110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:07:33.500 [2024-11-17 14:17:22.423021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.500 [2024-11-17 14:17:22.465838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:07:33.500 [2024-11-17 14:17:22.483831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.501 [2024-11-17 14:17:22.526783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:07:33.501 Running I/O for 1 seconds...
00:07:33.501 Running I/O for 1 seconds...
00:07:33.501 Running I/O for 1 seconds...
00:07:33.760 Running I/O for 1 seconds...
00:07:34.696 12166.00 IOPS, 47.52 MiB/s
00:07:34.696 Latency(us)
00:07:34.696 [2024-11-17T13:17:23.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:34.696 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:07:34.696 Nvme1n1 : 1.01 12223.72 47.75 0.00 0.00 10436.28 5242.88 16982.37
00:07:34.696 [2024-11-17T13:17:23.922Z] ===================================================================================================================
00:07:34.697 [2024-11-17T13:17:23.922Z] Total : 12223.72 47.75 0.00 0.00 10436.28 5242.88 16982.37
00:07:34.697 247072.00 IOPS, 965.12 MiB/s
00:07:34.697 Latency(us)
[2024-11-17T13:17:23.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:34.697 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:07:34.697 Nvme1n1 : 1.00 246684.32 963.61 0.00 0.00 516.69 233.29 1552.92
00:07:34.697 [2024-11-17T13:17:23.922Z] ===================================================================================================================
00:07:34.697 [2024-11-17T13:17:23.922Z] Total : 246684.32 963.61 0.00 0.00 516.69 233.29 1552.92
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1323912
00:07:34.697 11465.00 IOPS, 44.79 MiB/s
00:07:34.697 Latency(us)
[2024-11-17T13:17:23.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:34.697 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:07:34.697 Nvme1n1 : 1.01 11534.96 45.06 0.00 0.00 11063.63 4302.58 22339.23
00:07:34.697 [2024-11-17T13:17:23.922Z] ===================================================================================================================
00:07:34.697 [2024-11-17T13:17:23.922Z] Total : 11534.96 45.06 0.00 0.00 11063.63 4302.58 22339.23
00:07:34.697 9818.00 IOPS, 38.35 MiB/s
00:07:34.697 Latency(us)
[2024-11-17T13:17:23.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:34.697 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:07:34.697 Nvme1n1 : 1.01 9891.08 38.64 0.00 0.00 12900.76 4758.48 25644.52
00:07:34.697 [2024-11-17T13:17:23.922Z] ===================================================================================================================
00:07:34.697 [2024-11-17T13:17:23.922Z] Total : 9891.08 38.64 0.00 0.00 12900.76 4758.48 25644.52
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1323914
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1323917
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:34.697 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:34.697 rmmod nvme_tcp
00:07:34.697 rmmod nvme_fabrics
00:07:34.697 rmmod nvme_keyring
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1323883 ']'
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1323883
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1323883 ']'
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1323883
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1323883
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1323883'
killing process with pid 1323883
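With all four bdevperf runs reaped, the script deletes the subsystem over RPC, clears the exit trap, and nvmftestfini unwinds the rest; the trace below shows killprocess stopping nvmf_tgt and nvmf_tcp_fini undoing the network setup. Roughly, the host-side teardown amounts to the following sketch (the ip netns delete line is an assumption about what _remove_spdk_ns does for the single namespace created here):

  kill 1323883 && wait 1323883                            # killprocess: stop the nvmf_tgt reactors
  modprobe -v -r nvme-tcp                                 # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only the SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk                         # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                # leave the initiator port unconfigured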
00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1323883 00:07:34.957 14:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1323883 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.957 14:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:37.493 00:07:37.493 real 0m10.744s 00:07:37.493 user 0m15.794s 00:07:37.493 sys 0m6.244s 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.493 ************************************ 00:07:37.493 END TEST nvmf_bdev_io_wait 00:07:37.493 ************************************ 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:37.493 ************************************ 00:07:37.493 START TEST nvmf_queue_depth 00:07:37.493 ************************************ 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:37.493 * Looking for test storage... 
00:07:37.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.493 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.493 --rc genhtml_branch_coverage=1 00:07:37.493 --rc genhtml_function_coverage=1 00:07:37.493 --rc genhtml_legend=1 00:07:37.493 --rc geninfo_all_blocks=1 00:07:37.493 --rc geninfo_unexecuted_blocks=1 00:07:37.493 00:07:37.493 ' 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.494 --rc genhtml_branch_coverage=1 00:07:37.494 --rc genhtml_function_coverage=1 00:07:37.494 --rc genhtml_legend=1 00:07:37.494 --rc geninfo_all_blocks=1 00:07:37.494 --rc geninfo_unexecuted_blocks=1 00:07:37.494 00:07:37.494 ' 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.494 --rc genhtml_branch_coverage=1 00:07:37.494 --rc genhtml_function_coverage=1 00:07:37.494 --rc genhtml_legend=1 00:07:37.494 --rc geninfo_all_blocks=1 00:07:37.494 --rc geninfo_unexecuted_blocks=1 00:07:37.494 00:07:37.494 ' 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.494 --rc genhtml_branch_coverage=1 00:07:37.494 --rc genhtml_function_coverage=1 00:07:37.494 --rc genhtml_legend=1 00:07:37.494 --rc geninfo_all_blocks=1 00:07:37.494 --rc geninfo_unexecuted_blocks=1 00:07:37.494 00:07:37.494 ' 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:37.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:37.494 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:44.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:44.069 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:44.069 Found net devices under 0000:86:00.0: cvl_0_0 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:44.069 Found net devices under 0000:86:00.1: cvl_0_1 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
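Each supported PCI function is resolved to its kernel net device purely through sysfs, which is what produces the "Found net devices under ..." lines above. A standalone rendering of that lookup (the pci value is hard-coded here for illustration):

    # Resolve a PCI function to the net interfaces the kernel bound to it,
    # using the same glob the trace shows: /sys/bus/pci/devices/$pci/net/*
    pci=0000:86:00.0
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue      # no match: no net driver bound to this function
        echo "Found net devices under $pci: ${dev##*/}"
    done
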
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:44.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:07:44.069 00:07:44.069 --- 10.0.0.2 ping statistics --- 00:07:44.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.069 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:07:44.069 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:44.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:07:44.069 00:07:44.069 --- 10.0.0.1 ping statistics --- 00:07:44.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.069 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1327923 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1327923 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1327923 ']' 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.070 [2024-11-17 14:17:32.575282] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
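The rig those two pings just validated was assembled a few statements earlier: one port of the NIC (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and a tagged iptables rule opens port 4420 between them. Condensed to the bare commands from the trace (root privileges assumed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> root ns
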
00:07:44.070 [2024-11-17 14:17:32.575324] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.070 [2024-11-17 14:17:32.635715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.070 [2024-11-17 14:17:32.674531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.070 [2024-11-17 14:17:32.674564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.070 [2024-11-17 14:17:32.674571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.070 [2024-11-17 14:17:32.674577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.070 [2024-11-17 14:17:32.674582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.070 [2024-11-17 14:17:32.675137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.070 [2024-11-17 14:17:32.822511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.070 Malloc0 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.070 14:17:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.070 [2024-11-17 14:17:32.872882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1327949 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1327949 /var/tmp/bdevperf.sock 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1327949 ']' 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.070 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.070 [2024-11-17 14:17:32.922692] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
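With nvmf_tgt listening on /var/tmp/spdk.sock inside the namespace, the whole data path is configured over RPC, mirroring the rpc_cmd calls traced above (rpc.py path shortened for readability; the flags are the ones visible in the log):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192       # -u 8192: in-capsule data size; -o as captured in the log
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
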
00:07:44.070 [2024-11-17 14:17:32.922734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327949 ] 00:07:44.070 [2024-11-17 14:17:32.998169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.070 [2024-11-17 14:17:33.040590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.070 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.070 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:44.070 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:44.070 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.070 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.330 NVMe0n1 00:07:44.330 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.330 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:44.330 Running I/O for 10 seconds... 00:07:46.646 11889.00 IOPS, 46.44 MiB/s [2024-11-17T13:17:36.808Z] 12053.00 IOPS, 47.08 MiB/s [2024-11-17T13:17:37.745Z] 12040.67 IOPS, 47.03 MiB/s [2024-11-17T13:17:38.683Z] 12082.50 IOPS, 47.20 MiB/s [2024-11-17T13:17:39.620Z] 12136.20 IOPS, 47.41 MiB/s [2024-11-17T13:17:40.557Z] 12205.33 IOPS, 47.68 MiB/s [2024-11-17T13:17:41.494Z] 12243.57 IOPS, 47.83 MiB/s [2024-11-17T13:17:42.873Z] 12266.38 IOPS, 47.92 MiB/s [2024-11-17T13:17:43.810Z] 12267.33 IOPS, 47.92 MiB/s [2024-11-17T13:17:43.810Z] 12271.30 IOPS, 47.93 MiB/s 00:07:54.585 Latency(us) 00:07:54.585 [2024-11-17T13:17:43.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.585 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:54.585 Verification LBA range: start 0x0 length 0x4000 00:07:54.585 NVMe0n1 : 10.05 12308.23 48.08 0.00 0.00 82934.46 11739.49 52884.70 00:07:54.585 [2024-11-17T13:17:43.810Z] =================================================================================================================== 00:07:54.585 [2024-11-17T13:17:43.810Z] Total : 12308.23 48.08 0.00 0.00 82934.46 11739.49 52884.70 00:07:54.585 { 00:07:54.585 "results": [ 00:07:54.585 { 00:07:54.585 "job": "NVMe0n1", 00:07:54.585 "core_mask": "0x1", 00:07:54.585 "workload": "verify", 00:07:54.585 "status": "finished", 00:07:54.585 "verify_range": { 00:07:54.585 "start": 0, 00:07:54.585 "length": 16384 00:07:54.585 }, 00:07:54.585 "queue_depth": 1024, 00:07:54.585 "io_size": 4096, 00:07:54.585 "runtime": 10.052788, 00:07:54.585 "iops": 12308.227329572652, 00:07:54.585 "mibps": 48.07901300614317, 00:07:54.585 "io_failed": 0, 00:07:54.585 "io_timeout": 0, 00:07:54.585 "avg_latency_us": 82934.46017426162, 00:07:54.585 "min_latency_us": 11739.492173913044, 00:07:54.585 "max_latency_us": 52884.702608695654 00:07:54.585 } 00:07:54.585 ], 00:07:54.585 "core_count": 1 00:07:54.585 } 00:07:54.585 14:17:43 
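The measurement itself is bdevperf running as a secondary process against its own RPC socket at queue depth 1024, with the controller attached over the fabric and perform_tests emitting the JSON summary shown above. A reduced sketch, assuming the JSON were captured to a file instead of the console (the jq line is illustrative, not part of the test):

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests \
        > perform_tests.json
    jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, avg \(.avg_latency_us|floor) us"' \
        perform_tests.json              # -> NVMe0n1: 12308 IOPS, avg 82934 us
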
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1327949 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1327949 ']' 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1327949 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327949 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327949' 00:07:54.585 killing process with pid 1327949 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1327949 00:07:54.585 Received shutdown signal, test time was about 10.000000 seconds 00:07:54.585 00:07:54.585 Latency(us) 00:07:54.585 [2024-11-17T13:17:43.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.585 [2024-11-17T13:17:43.810Z] =================================================================================================================== 00:07:54.585 [2024-11-17T13:17:43.810Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1327949 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:54.585 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:54.585 rmmod nvme_tcp 00:07:54.585 rmmod nvme_fabrics 00:07:54.585 rmmod nvme_keyring 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1327923 ']' 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1327923 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1327923 ']' 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
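Shutdown then proceeds outside-in, as the rmmod lines above and the killprocess trace that follows show: the kernel initiator modules come out first, then the target process, then only the firewall rules tagged SPDK_NVMF, then the namespace. The equivalent commands, with the namespace removal written as plain ip(8) since _remove_spdk_ns itself is not traced here:

    modprobe -v -r nvme-tcp            # drags out nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid" # nvmf_tgt, pid 1327923 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every untagged rule
    ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
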
common/autotest_common.sh@958 -- # kill -0 1327923 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327923 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327923' 00:07:54.845 killing process with pid 1327923 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1327923 00:07:54.845 14:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1327923 00:07:54.845 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.845 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.845 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.845 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:54.845 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:54.845 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.845 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:55.104 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:55.104 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:55.104 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.104 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.104 14:17:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.010 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.010 00:07:57.010 real 0m19.852s 00:07:57.010 user 0m23.192s 00:07:57.010 sys 0m6.163s 00:07:57.010 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.010 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.010 ************************************ 00:07:57.010 END TEST nvmf_queue_depth 00:07:57.010 ************************************ 00:07:57.010 14:17:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:57.010 14:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.010 14:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.011 14:17:46 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.011 ************************************ 00:07:57.011 START TEST nvmf_target_multipath 00:07:57.011 ************************************ 00:07:57.011 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:57.271 * Looking for test storage... 00:07:57.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.271 --rc genhtml_branch_coverage=1 00:07:57.271 --rc genhtml_function_coverage=1 00:07:57.271 --rc genhtml_legend=1 00:07:57.271 --rc geninfo_all_blocks=1 00:07:57.271 --rc geninfo_unexecuted_blocks=1 00:07:57.271 00:07:57.271 ' 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.271 --rc genhtml_branch_coverage=1 00:07:57.271 --rc genhtml_function_coverage=1 00:07:57.271 --rc genhtml_legend=1 00:07:57.271 --rc geninfo_all_blocks=1 00:07:57.271 --rc geninfo_unexecuted_blocks=1 00:07:57.271 00:07:57.271 ' 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.271 --rc genhtml_branch_coverage=1 00:07:57.271 --rc genhtml_function_coverage=1 00:07:57.271 --rc genhtml_legend=1 00:07:57.271 --rc geninfo_all_blocks=1 00:07:57.271 --rc geninfo_unexecuted_blocks=1 00:07:57.271 00:07:57.271 ' 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.271 --rc genhtml_branch_coverage=1 00:07:57.271 --rc genhtml_function_coverage=1 00:07:57.271 --rc genhtml_legend=1 00:07:57.271 --rc geninfo_all_blocks=1 00:07:57.271 --rc geninfo_unexecuted_blocks=1 00:07:57.271 00:07:57.271 ' 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
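The lcov probe above is a dotted-version comparison: "lt 1.15 2" splits both strings on the IFS=.-: separator set and walks the fields numerically, which is what fills the ver1/ver2 arrays in the trace. A compact standalone rendering of that walk (not the test tree's exact function):

    # version_lt A B: succeed when dotted version A is strictly older than B.
    version_lt() {
        local IFS=.-:                  # same separator set the trace shows
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1                       # equal versions: not strictly less
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2: use the 1.x option set"
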
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.271 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0
00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.272 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:03.885 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:03.885 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.885 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:03.886 Found net devices under 0000:86:00.0: cvl_0_0 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.886 14:17:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:03.886 Found net devices under 0000:86:00.1: cvl_0_1 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:08:03.886 00:08:03.886 --- 10.0.0.2 ping statistics --- 00:08:03.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.886 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:08:03.886 00:08:03.886 --- 10.0.0.1 ping statistics --- 00:08:03.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.886 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:03.886 only one NIC for nvmf test 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
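The nvmf_tcp_init sequence traced above carves the two ice ports into a target-side network namespace and a root-namespace initiator, tags its firewall rule, and ping-checks both directions before any NVMe/TCP traffic flows. A condensed sketch of those steps, using the interface names and 10.0.0.0/24 addressing from this run (root required; the ipts wrapper mirrors the SPDK_NVMF comment tagging visible in the expanded iptables call):

    # target port moves into its own namespace; initiator port stays in the root ns
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # every rule carries an SPDK_NVMF comment so teardown can grep it back out
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # reachability both ways: root ns -> target IP, target ns -> initiator IP
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Teardown later reverses the firewall change by filtering the tagged rules back out, which is exactly what the iptr trace below shows: iptables-save | grep -v SPDK_NVMF | iptables-restore.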
00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:03.886 rmmod nvme_tcp 00:08:03.886 rmmod nvme_fabrics 00:08:03.886 rmmod nvme_keyring 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.886 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:05.796 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:05.797 00:08:05.797 real 0m8.467s 00:08:05.797 user 0m1.797s 00:08:05.797 sys 0m4.676s 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:05.797 ************************************ 00:08:05.797 END TEST nvmf_target_multipath 00:08:05.797 ************************************ 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:05.797 ************************************ 00:08:05.797 START TEST nvmf_zcopy 00:08:05.797 ************************************ 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:05.797 * Looking for test storage... 
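run_test, visible at the START/END banners above, is the autotest harness's suite wrapper: it times the suite and emits the asterisk banners plus the real/user/sys summary that closes nvmf_target_multipath before nvmf_zcopy begins. A stripped-down sketch of that wrapper (the real one in autotest_common.sh also manages xtrace state and exit-code bookkeeping; the argument check mirrors the "'[' 3 -le 1 ']'" test in the trace):

    run_test() {
        local suite=$1; shift
        (($# >= 1)) || return 1   # need a command to run, per the -le 1 check
        echo "************************************"
        echo "START TEST $suite"
        echo "************************************"
        time "$@"                 # e.g. run_test nvmf_zcopy .../zcopy.sh --transport=tcp
        echo "************************************"
        echo "END TEST $suite"
        echo "************************************"
    }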
00:08:05.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.797 --rc genhtml_branch_coverage=1 00:08:05.797 --rc genhtml_function_coverage=1 00:08:05.797 --rc genhtml_legend=1 00:08:05.797 --rc geninfo_all_blocks=1 00:08:05.797 --rc geninfo_unexecuted_blocks=1 00:08:05.797 00:08:05.797 ' 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.797 --rc genhtml_branch_coverage=1 00:08:05.797 --rc genhtml_function_coverage=1 00:08:05.797 --rc genhtml_legend=1 00:08:05.797 --rc geninfo_all_blocks=1 00:08:05.797 --rc geninfo_unexecuted_blocks=1 00:08:05.797 00:08:05.797 ' 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.797 --rc genhtml_branch_coverage=1 00:08:05.797 --rc genhtml_function_coverage=1 00:08:05.797 --rc genhtml_legend=1 00:08:05.797 --rc geninfo_all_blocks=1 00:08:05.797 --rc geninfo_unexecuted_blocks=1 00:08:05.797 00:08:05.797 ' 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.797 --rc genhtml_branch_coverage=1 00:08:05.797 --rc genhtml_function_coverage=1 00:08:05.797 --rc genhtml_legend=1 00:08:05.797 --rc geninfo_all_blocks=1 00:08:05.797 --rc geninfo_unexecuted_blocks=1 00:08:05.797 00:08:05.797 ' 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.797 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:05.798 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:12.378 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:12.378 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:12.378 Found net devices under 0000:86:00.0: cvl_0_0 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:12.378 Found net devices under 0000:86:00.1: cvl_0_1 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.378 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:12.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:08:12.379 00:08:12.379 --- 10.0.0.2 ping statistics --- 00:08:12.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.379 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:08:12.379 00:08:12.379 --- 10.0.0.1 ping statistics --- 00:08:12.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.379 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1336928 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1336928 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1336928 ']' 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.379 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.379 [2024-11-17 14:18:01.030621] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:08:12.379 [2024-11-17 14:18:01.030666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.379 [2024-11-17 14:18:01.112109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.379 [2024-11-17 14:18:01.154142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.379 [2024-11-17 14:18:01.154178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.379 [2024-11-17 14:18:01.154187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.379 [2024-11-17 14:18:01.154194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.379 [2024-11-17 14:18:01.154200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.379 [2024-11-17 14:18:01.154769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.379 [2024-11-17 14:18:01.303021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.379 [2024-11-17 14:18:01.323194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.379 malloc0 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:12.379 { 00:08:12.379 "params": { 00:08:12.379 "name": "Nvme$subsystem", 00:08:12.379 "trtype": "$TEST_TRANSPORT", 00:08:12.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.379 "adrfam": "ipv4", 00:08:12.379 "trsvcid": "$NVMF_PORT", 00:08:12.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.379 "hdgst": ${hdgst:-false}, 00:08:12.379 "ddgst": ${ddgst:-false} 00:08:12.379 }, 00:08:12.379 "method": "bdev_nvme_attach_controller" 00:08:12.379 } 00:08:12.379 EOF 00:08:12.379 )") 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
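At this point the target side is fully assembled: the trace above created the zero-copy TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a malloc-backed namespace. rpc_cmd in the trace resolves to scripts/rpc.py against the target's /var/tmp/spdk.sock (a unix socket, so no netns exec is needed), so the same bring-up issued by hand looks like:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0    # 32 MB bdev with 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

gen_nvmf_target_json then renders the bdev_nvme_attach_controller stanza printed just below, which bdevperf consumes as its --json config via /dev/fd/62.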
00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:12.379 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.379 "params": { 00:08:12.380 "name": "Nvme1", 00:08:12.380 "trtype": "tcp", 00:08:12.380 "traddr": "10.0.0.2", 00:08:12.380 "adrfam": "ipv4", 00:08:12.380 "trsvcid": "4420", 00:08:12.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.380 "hdgst": false, 00:08:12.380 "ddgst": false 00:08:12.380 }, 00:08:12.380 "method": "bdev_nvme_attach_controller" 00:08:12.380 }' 00:08:12.380 [2024-11-17 14:18:01.402971] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:08:12.380 [2024-11-17 14:18:01.403018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336975 ] 00:08:12.380 [2024-11-17 14:18:01.478243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.380 [2024-11-17 14:18:01.519757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.639 Running I/O for 10 seconds... 00:08:14.589 8367.00 IOPS, 65.37 MiB/s [2024-11-17T13:18:05.194Z] 8402.50 IOPS, 65.64 MiB/s [2024-11-17T13:18:06.132Z] 8439.67 IOPS, 65.93 MiB/s [2024-11-17T13:18:07.070Z] 8454.25 IOPS, 66.05 MiB/s [2024-11-17T13:18:08.010Z] 8458.60 IOPS, 66.08 MiB/s [2024-11-17T13:18:08.949Z] 8462.00 IOPS, 66.11 MiB/s [2024-11-17T13:18:09.889Z] 8473.86 IOPS, 66.20 MiB/s [2024-11-17T13:18:11.271Z] 8482.38 IOPS, 66.27 MiB/s [2024-11-17T13:18:11.840Z] 8461.78 IOPS, 66.11 MiB/s [2024-11-17T13:18:12.100Z] 8469.60 IOPS, 66.17 MiB/s 00:08:22.875 Latency(us) 00:08:22.875 [2024-11-17T13:18:12.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.875 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:22.875 Verification LBA range: start 0x0 length 0x1000 00:08:22.875 Nvme1n1 : 10.01 8471.54 66.18 0.00 0.00 15066.12 826.32 23478.98 00:08:22.875 [2024-11-17T13:18:12.100Z] =================================================================================================================== 00:08:22.875 [2024-11-17T13:18:12.100Z] Total : 8471.54 66.18 0.00 0.00 15066.12 826.32 23478.98 00:08:22.875 14:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1339239 00:08:22.875 14:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:22.875 14:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:22.875 14:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:22.875 14:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:22.875 14:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:22.875 14:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.875 14:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.875 14:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.875 { 00:08:22.875 "params": { 00:08:22.875 "name": 
"Nvme$subsystem", 00:08:22.875 "trtype": "$TEST_TRANSPORT", 00:08:22.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.875 "adrfam": "ipv4", 00:08:22.875 "trsvcid": "$NVMF_PORT", 00:08:22.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.875 "hdgst": ${hdgst:-false}, 00:08:22.875 "ddgst": ${ddgst:-false} 00:08:22.875 }, 00:08:22.875 "method": "bdev_nvme_attach_controller" 00:08:22.875 } 00:08:22.875 EOF 00:08:22.875 )") 00:08:22.875 [2024-11-17 14:18:12.001803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.876 [2024-11-17 14:18:12.001834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.876 14:18:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:22.876 14:18:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:22.876 14:18:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:22.876 14:18:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.876 "params": { 00:08:22.876 "name": "Nvme1", 00:08:22.876 "trtype": "tcp", 00:08:22.876 "traddr": "10.0.0.2", 00:08:22.876 "adrfam": "ipv4", 00:08:22.876 "trsvcid": "4420", 00:08:22.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:22.876 "hdgst": false, 00:08:22.876 "ddgst": false 00:08:22.876 }, 00:08:22.876 "method": "bdev_nvme_attach_controller" 00:08:22.876 }' 00:08:22.876 [2024-11-17 14:18:12.013806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.876 [2024-11-17 14:18:12.013820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.876 [2024-11-17 14:18:12.025835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.876 [2024-11-17 14:18:12.025846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.876 [2024-11-17 14:18:12.037865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.876 [2024-11-17 14:18:12.037875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.876 [2024-11-17 14:18:12.040402] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:08:22.876 [2024-11-17 14:18:12.040448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339239 ] 00:08:22.876 [2024-11-17 14:18:12.049899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.876 [2024-11-17 14:18:12.049909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.876 [2024-11-17 14:18:12.061934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.876 [2024-11-17 14:18:12.061949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.876 [2024-11-17 14:18:12.073966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.876 [2024-11-17 14:18:12.073976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.876 [2024-11-17 14:18:12.085997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.876 [2024-11-17 14:18:12.086006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.098032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.098043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.110061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.110070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.113257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.136 [2024-11-17 14:18:12.122092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.122104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.134126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.134138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.146155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.146166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.154943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.136 [2024-11-17 14:18:12.158187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.158198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.170235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.170253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.182259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.182276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.194290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:23.136 [2024-11-17 14:18:12.194303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.206319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.206330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.218355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.218367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.230383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.230393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.242412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.242422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.254462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.254483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.266486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.266499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.278521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.278534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.290554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.290563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.302593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.302604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.314617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.314628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.326713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.326728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.338748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.136 [2024-11-17 14:18:12.338762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.136 [2024-11-17 14:18:12.350784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.137 [2024-11-17 14:18:12.350806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.397 Running I/O for 5 seconds... 
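The burst of "Requested NSID 1 already in use" records that follows is this job re-issuing nvmf_subsystem_add_ns for an NSID that malloc0 already occupies while the 5-second randrw bdevperf workload keeps I/O in flight; judging by the function names in the records (nvmf_rpc_ns_paused, spdk_nvmf_subsystem_add_ns_ext), each attempt pauses the subsystem, fails the add, and resumes it, which is the error path being exercised. A hand-driven equivalent, with a hypothetical iteration count (zcopy.sh drives the real cadence):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for _ in $(seq 1 20); do
        # NSID 1 is taken, so every call must fail cleanly while I/O continues
        $SPDK/scripts/rpc.py nvmf_subsystem_add_ns \
            nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done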
00:08:24.179 16471.00 IOPS, 128.68 MiB/s [2024-11-17T13:18:13.404Z]
00:08:25.219 16479.50 IOPS, 128.75 MiB/s [2024-11-17T13:18:14.444Z]
00:08:26.261 16503.67 IOPS, 128.93 MiB/s [2024-11-17T13:18:15.486Z]
00:08:27.042 [2024-11-17 14:18:16.110128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.042 [2024-11-17 14:18:16.110147]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.042 [2024-11-17 14:18:16.124284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.042 [2024-11-17 14:18:16.124302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.042 [2024-11-17 14:18:16.138741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.042 [2024-11-17 14:18:16.138759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.042 [2024-11-17 14:18:16.149668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.042 [2024-11-17 14:18:16.149686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.042 [2024-11-17 14:18:16.164452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.042 [2024-11-17 14:18:16.164472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.042 [2024-11-17 14:18:16.177916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.042 [2024-11-17 14:18:16.177935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.042 [2024-11-17 14:18:16.193061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.042 [2024-11-17 14:18:16.193080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.042 [2024-11-17 14:18:16.208508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.042 [2024-11-17 14:18:16.208527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.042 [2024-11-17 14:18:16.223059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.042 [2024-11-17 14:18:16.223078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.042 [2024-11-17 14:18:16.234268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.042 [2024-11-17 14:18:16.234287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.042 [2024-11-17 14:18:16.248971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.042 [2024-11-17 14:18:16.248990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.042 [2024-11-17 14:18:16.261811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.042 [2024-11-17 14:18:16.261830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.276306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.276325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.290385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.290404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.304881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.304899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.316199] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.316217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.330804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.330822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.342031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.342049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.356694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.356712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.370720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.370739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 16475.50 IOPS, 128.71 MiB/s [2024-11-17T13:18:16.527Z] [2024-11-17 14:18:16.385217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.385236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.396423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.396442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.410857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.410875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.425063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.425081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.440285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.440303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.454866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.454889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.468566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.468585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.482814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.482832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.302 [2024-11-17 14:18:16.496922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.302 [2024-11-17 14:18:16.496940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.303 [2024-11-17 14:18:16.510958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:27.303 [2024-11-17 14:18:16.510976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.562 [2024-11-17 14:18:16.525137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.562 [2024-11-17 14:18:16.525156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.562 [2024-11-17 14:18:16.539804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.562 [2024-11-17 14:18:16.539822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.550511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.550529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.565337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.565361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.580862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.580880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.595539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.595557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.611003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.611021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.620653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.620670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.635152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.635170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.648985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.649004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.663406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.663425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.674218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.674236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.688791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.688809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.702755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.702775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.717271] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.717294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.728296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.728315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.743068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.743087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.755884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.755902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.770436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.770454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.563 [2024-11-17 14:18:16.784436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.563 [2024-11-17 14:18:16.784454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.798333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.798358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.812611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.812629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.826478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.826497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.840796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.840815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.851735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.851755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.866134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.866154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.879855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.879876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.894553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.894572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.905573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.905592] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.920614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.920633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.931587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.931616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.946272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.946291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.957216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.957235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.971388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.971415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.985289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.985308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:16.999314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:16.999333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:17.013551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:17.013570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:17.027527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:17.027546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.823 [2024-11-17 14:18:17.041819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.823 [2024-11-17 14:18:17.041839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.083 [2024-11-17 14:18:17.052861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.083 [2024-11-17 14:18:17.052880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.067398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.067417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.081507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.081526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.095850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.095869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.105722] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.105741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.120420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.120439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.135639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.135659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.149959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.149978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.164470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.164489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.174995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.175014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.189311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.189331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.203526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.203545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.218001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.218020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.229557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.229580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.244091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.244110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.258279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.258298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.269540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.269559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.284455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.284473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.084 [2024-11-17 14:18:17.300387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.084 [2024-11-17 14:18:17.300406] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 [2024-11-17 14:18:17.314239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 14:18:17.314258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 [2024-11-17 14:18:17.328716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 14:18:17.328734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 [2024-11-17 14:18:17.342839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 14:18:17.342858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 [2024-11-17 14:18:17.353716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 14:18:17.353734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 [2024-11-17 14:18:17.368433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 14:18:17.368452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 16471.40 IOPS, 128.68 MiB/s [2024-11-17T13:18:17.569Z] [2024-11-17 14:18:17.380082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 14:18:17.380100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 00:08:28.344 Latency(us) 00:08:28.344 [2024-11-17T13:18:17.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.344 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:28.344 Nvme1n1 : 5.01 16474.36 128.71 0.00 0.00 7761.88 3219.81 14702.86 00:08:28.344 [2024-11-17T13:18:17.569Z] =================================================================================================================== 00:08:28.344 [2024-11-17T13:18:17.569Z] Total : 16474.36 128.71 0.00 0.00 7761.88 3219.81 14702.86 00:08:28.344 [2024-11-17 14:18:17.390882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 14:18:17.390897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 [2024-11-17 14:18:17.402915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 14:18:17.402929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 [2024-11-17 14:18:17.414956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 14:18:17.414974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 [2024-11-17 14:18:17.426983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 14:18:17.426997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 [2024-11-17 14:18:17.439017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 14:18:17.439032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.344 [2024-11-17 14:18:17.451044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.344 [2024-11-17 
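As a quick cross-check on the summary, the MiB/s column is just the IOPS column times the 8192-byte I/O size from the job line: 16474.36 IO/s × 8192 B ≈ 134,957,957 B/s, and dividing by 1,048,576 B per MiB gives ≈ 128.71 MiB/s, which matches the Total row.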
[... the same error pair continues at 14:18:17.390, .402, .414, .426, .439, .451, .463, .475, .487, .499, .511, .523 and .535 while the target quiesces ...]
00:08:28.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1339239) - No such process
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1339239
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:28.344 delay0
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:28.344 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
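The three rpc_cmd calls traced above free NSID 1 (which every earlier add attempt found in use), wrap malloc0 in a delay bdev, and re-expose the slow bdev as NSID 1 for the abort run that follows. A minimal standalone sketch of the same sequence, assuming a running SPDK target and the stock scripts/rpc.py helper; the NQN, bdev names and latency values are taken from the trace above, while the rpc shell variable is purely illustrative:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  "$rpc" nvmf_subsystem_remove_ns "$nqn" 1             # free NSID 1
  "$rpc" bdev_delay_create -b malloc0 -d delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay bdev over malloc0, latencies in microseconds
  "$rpc" nvmf_subsystem_add_ns "$nqn" delay0 -n 1      # re-add the slow bdev as NSID 1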
00:08:28.604 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-11-17 14:18:17.640442] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:35.180 Initializing NVMe Controllers
00:08:35.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:35.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:35.180 Initialization complete. Launching workers.
00:08:35.180 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 89
00:08:35.180 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 376, failed to submit 33
00:08:35.180 success 176, unsuccessful 200, failed 0
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:35.180 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1336928 ']'
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1336928
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1336928 ']'
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1336928
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1336928
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1336928'
00:08:35.180 killing process with pid 1336928
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1336928
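The teardown traced here follows the harness's usual pattern: once the test body has passed, disarm the exit trap (the "trap - SIGINT SIGTERM EXIT" step at zcopy.sh@59), then kill the target PID and wait on it so the shell reaps its exit status. A minimal sketch of that idiom in plain bash, with sleep standing in for the target process and an illustrative cleanup body, not the real nvmftestfini implementation:

  cleanup() { echo 'cleanup: stopping target' >&2; }
  trap cleanup SIGINT SIGTERM EXIT    # armed for the whole test run

  sleep 30 &                          # stand-in for the spdk_tgt process
  pid=$!

  trap - SIGINT SIGTERM EXIT          # test passed: disarm so cleanup cannot fire twice
  kill "$pid"                         # ask the process to exit...
  wait "$pid" 2>/dev/null || true     # ...and reap it, tolerating an already-gone PID
  cleanup                             # explicit teardown runs exactly once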
00:08:35.180 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1336928
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:35.180 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:37.092 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:37.092
00:08:37.092 real 0m31.364s
00:08:37.092 user 0m42.071s
00:08:37.092 sys 0m10.829s
00:08:37.092 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:37.092 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:37.092 ************************************
00:08:37.092 END TEST nvmf_zcopy
00:08:37.092 ************************************
00:08:37.092 14:18:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:37.092 14:18:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:37.092 14:18:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:37.092 14:18:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:37.092 ************************************
00:08:37.092 START TEST nvmf_nmic
00:08:37.092 ************************************
00:08:37.092 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:37.092 * Looking for test storage...
00:08:37.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.092 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.092 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.092 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.353 --rc genhtml_branch_coverage=1 00:08:37.353 --rc genhtml_function_coverage=1 00:08:37.353 --rc genhtml_legend=1 00:08:37.353 --rc geninfo_all_blocks=1 00:08:37.353 --rc geninfo_unexecuted_blocks=1 00:08:37.353 00:08:37.353 ' 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.353 --rc genhtml_branch_coverage=1 00:08:37.353 --rc genhtml_function_coverage=1 00:08:37.353 --rc genhtml_legend=1 00:08:37.353 --rc geninfo_all_blocks=1 00:08:37.353 --rc geninfo_unexecuted_blocks=1 00:08:37.353 00:08:37.353 ' 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.353 --rc genhtml_branch_coverage=1 00:08:37.353 --rc genhtml_function_coverage=1 00:08:37.353 --rc genhtml_legend=1 00:08:37.353 --rc geninfo_all_blocks=1 00:08:37.353 --rc geninfo_unexecuted_blocks=1 00:08:37.353 00:08:37.353 ' 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.353 --rc genhtml_branch_coverage=1 00:08:37.353 --rc genhtml_function_coverage=1 00:08:37.353 --rc genhtml_legend=1 00:08:37.353 --rc geninfo_all_blocks=1 00:08:37.353 --rc geninfo_unexecuted_blocks=1 00:08:37.353 00:08:37.353 ' 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
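The trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2: cmp_versions splits each version string on '.', '-' and ':' and compares it component by component. A rough standalone equivalent, assuming GNU coreutils' sort -V in place of the per-component loop, so this is an approximation rather than the scripts/common.sh implementation:

  version_lt() {
      # true when $1 sorts strictly before $2 in version order
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }

  if version_lt 1.15 2; then
      echo 'lcov 1.15 predates 2: use the legacy --rc lcov_* option spelling'
  fi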
00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.353 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:37.354 
14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.354 14:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:43.936 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.936 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:43.937 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.937 14:18:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:43.937 Found net devices under 0000:86:00.0: cvl_0_0 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:43.937 Found net devices under 0000:86:00.1: cvl_0_1 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:43.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:08:43.937 00:08:43.937 --- 10.0.0.2 ping statistics --- 00:08:43.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.937 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:08:43.937 00:08:43.937 --- 10.0.0.1 ping statistics --- 00:08:43.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.937 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1344710 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1344710 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1344710 ']' 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.937 [2024-11-17 14:18:32.388508] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:08:43.937 [2024-11-17 14:18:32.388555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.937 [2024-11-17 14:18:32.468022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.937 [2024-11-17 14:18:32.511973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.937 [2024-11-17 14:18:32.512010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.937 [2024-11-17 14:18:32.512017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.937 [2024-11-17 14:18:32.512024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.937 [2024-11-17 14:18:32.512029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.937 [2024-11-17 14:18:32.513637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.937 [2024-11-17 14:18:32.513669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.937 [2024-11-17 14:18:32.513780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.937 [2024-11-17 14:18:32.513780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:43.937 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.938 [2024-11-17 14:18:32.651498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.938 Malloc0 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.938 [2024-11-17 14:18:32.727648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:43.938 test case1: single bdev can't be used in multiple subsystems 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.938 [2024-11-17 14:18:32.755551] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:43.938 [2024-11-17 14:18:32.755570] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:43.938 [2024-11-17 14:18:32.755578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.938 request: 00:08:43.938 { 00:08:43.938 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:43.938 "namespace": { 00:08:43.938 "bdev_name": "Malloc0", 00:08:43.938 "no_auto_visible": false 
00:08:43.938 }, 00:08:43.938 "method": "nvmf_subsystem_add_ns", 00:08:43.938 "req_id": 1 00:08:43.938 } 00:08:43.938 Got JSON-RPC error response 00:08:43.938 response: 00:08:43.938 { 00:08:43.938 "code": -32602, 00:08:43.938 "message": "Invalid parameters" 00:08:43.938 } 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:43.938 Adding namespace failed - expected result. 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:43.938 test case2: host connect to nvmf target in multiple paths 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.938 [2024-11-17 14:18:32.767705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.938 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:44.878 14:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:46.260 14:18:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:46.260 14:18:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:46.260 14:18:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:46.260 14:18:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:46.260 14:18:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:48.169 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:48.169 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:48.169 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:48.169 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:48.169 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:48.169 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:48.169 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:48.169 [global] 00:08:48.169 thread=1 00:08:48.169 invalidate=1 00:08:48.169 rw=write 00:08:48.169 time_based=1 00:08:48.169 runtime=1 00:08:48.169 ioengine=libaio 00:08:48.169 direct=1 00:08:48.169 bs=4096 00:08:48.169 iodepth=1 00:08:48.169 norandommap=0 00:08:48.169 numjobs=1 00:08:48.169 00:08:48.169 verify_dump=1 00:08:48.169 verify_backlog=512 00:08:48.169 verify_state_save=0 00:08:48.169 do_verify=1 00:08:48.169 verify=crc32c-intel 00:08:48.169 [job0] 00:08:48.169 filename=/dev/nvme0n1 00:08:48.169 Could not set queue depth (nvme0n1) 00:08:48.426 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.426 fio-3.35 00:08:48.426 Starting 1 thread 00:08:49.364 00:08:49.364 job0: (groupid=0, jobs=1): err= 0: pid=1345689: Sun Nov 17 14:18:38 2024 00:08:49.364 read: IOPS=1575, BW=6302KiB/s (6454kB/s)(6504KiB/1032msec) 00:08:49.364 slat (nsec): min=5377, max=22904, avg=6921.38, stdev=923.05 00:08:49.364 clat (usec): min=170, max=41023, avg=423.56, stdev=2852.82 00:08:49.364 lat (usec): min=183, max=41029, avg=430.48, stdev=2852.76 00:08:49.364 clat percentiles (usec): 00:08:49.364 | 1.00th=[ 188], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:08:49.364 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 225], 00:08:49.364 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 235], 95.00th=[ 243], 00:08:49.364 | 99.00th=[ 265], 99.50th=[ 420], 99.90th=[41157], 99.95th=[41157], 00:08:49.364 | 99.99th=[41157] 00:08:49.364 write: IOPS=1984, BW=7938KiB/s (8128kB/s)(8192KiB/1032msec); 0 zone resets 00:08:49.364 slat (nsec): min=9083, max=51178, avg=10100.55, stdev=1372.26 00:08:49.364 clat (usec): min=112, max=366, avg=148.57, stdev=23.07 00:08:49.364 lat (usec): min=123, max=417, avg=158.67, stdev=23.32 00:08:49.364 clat percentiles (usec): 00:08:49.364 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 126], 20.00th=[ 129], 00:08:49.364 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 157], 00:08:49.364 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 186], 00:08:49.364 | 99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 227], 99.95th=[ 293], 00:08:49.364 | 99.99th=[ 367] 00:08:49.364 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=8192.00, stdev=5792.62, samples=2 00:08:49.364 iops : min= 1024, max= 3072, avg=2048.00, stdev=1448.15, samples=2 00:08:49.364 lat (usec) : 250=98.58%, 500=1.20% 00:08:49.364 lat (msec) : 50=0.22% 00:08:49.364 cpu : usr=1.55%, sys=3.20%, ctx=3674, majf=0, minf=1 00:08:49.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:49.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.364 issued rwts: total=1626,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:49.364 00:08:49.364 Run status group 0 (all jobs): 00:08:49.364 READ: bw=6302KiB/s (6454kB/s), 6302KiB/s-6302KiB/s (6454kB/s-6454kB/s), io=6504KiB (6660kB), run=1032-1032msec 00:08:49.364 WRITE: bw=7938KiB/s (8128kB/s), 7938KiB/s-7938KiB/s (8128kB/s-8128kB/s), io=8192KiB (8389kB), run=1032-1032msec 00:08:49.364 00:08:49.364 Disk stats (read/write): 00:08:49.364 nvme0n1: ios=1671/2048, merge=0/0, ticks=532/290, in_queue=822, util=91.18% 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:49.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.624 rmmod nvme_tcp 00:08:49.624 rmmod nvme_fabrics 00:08:49.624 rmmod nvme_keyring 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1344710 ']' 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1344710 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1344710 ']' 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1344710 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.624 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1344710 00:08:49.884 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.884 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.884 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1344710' 00:08:49.884 killing process with pid 1344710 00:08:49.884 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1344710 00:08:49.884 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 1344710 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.884 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:52.426 00:08:52.426 real 0m14.924s 00:08:52.426 user 0m33.135s 00:08:52.426 sys 0m5.283s 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:52.426 ************************************ 00:08:52.426 END TEST nvmf_nmic 00:08:52.426 ************************************ 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.426 ************************************ 00:08:52.426 START TEST nvmf_fio_target 00:08:52.426 ************************************ 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:52.426 * Looking for test storage... 
00:08:52.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.426 --rc genhtml_branch_coverage=1 00:08:52.426 --rc genhtml_function_coverage=1 00:08:52.426 --rc genhtml_legend=1 00:08:52.426 --rc geninfo_all_blocks=1 00:08:52.426 --rc geninfo_unexecuted_blocks=1 00:08:52.426 00:08:52.426 ' 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.426 --rc genhtml_branch_coverage=1 00:08:52.426 --rc genhtml_function_coverage=1 00:08:52.426 --rc genhtml_legend=1 00:08:52.426 --rc geninfo_all_blocks=1 00:08:52.426 --rc geninfo_unexecuted_blocks=1 00:08:52.426 00:08:52.426 ' 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.426 --rc genhtml_branch_coverage=1 00:08:52.426 --rc genhtml_function_coverage=1 00:08:52.426 --rc genhtml_legend=1 00:08:52.426 --rc geninfo_all_blocks=1 00:08:52.426 --rc geninfo_unexecuted_blocks=1 00:08:52.426 00:08:52.426 ' 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.426 --rc genhtml_branch_coverage=1 00:08:52.426 --rc genhtml_function_coverage=1 00:08:52.426 --rc genhtml_legend=1 00:08:52.426 --rc geninfo_all_blocks=1 00:08:52.426 --rc geninfo_unexecuted_blocks=1 00:08:52.426 00:08:52.426 ' 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:52.426 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.427 14:18:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:52.427 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.009 14:18:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.009 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:59.010 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:59.010 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.010 14:18:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:59.010 Found net devices under 0000:86:00.0: cvl_0_0 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:59.010 Found net devices under 0000:86:00.1: cvl_0_1 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.010 14:18:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:08:59.010 00:08:59.010 --- 10.0.0.2 ping statistics --- 00:08:59.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.010 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:08:59.010 00:08:59.010 --- 10.0.0.1 ping statistics --- 00:08:59.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.010 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1349458 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1349458 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1349458 ']' 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.010 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.010 [2024-11-17 14:18:47.449414] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
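(For reference, the nvmf_tcp_init bring-up traced above reduces to the short sequence below. This is a sketch reconstructed from the traced commands, not an excerpt of nvmf/common.sh itself; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the values this particular run discovered, and ipts in the trace is the script's wrapper around plain iptables.)

  # Target port lives in its own namespace; the initiator stays in the root netns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port (the ipts wrapper also tags the rule with an SPDK_NVMF comment).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target netns -> initiator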
00:08:59.010 [2024-11-17 14:18:47.449458] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.010 [2024-11-17 14:18:47.531189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.010 [2024-11-17 14:18:47.573657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.010 [2024-11-17 14:18:47.573693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.010 [2024-11-17 14:18:47.573700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.010 [2024-11-17 14:18:47.573706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.011 [2024-11-17 14:18:47.573711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.011 [2024-11-17 14:18:47.575407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.011 [2024-11-17 14:18:47.575289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.011 [2024-11-17 14:18:47.575307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.011 [2024-11-17 14:18:47.575407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.011 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.011 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:59.011 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.011 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:59.011 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.011 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.011 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:59.011 [2024-11-17 14:18:47.897760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.011 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.011 14:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:59.011 14:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.270 14:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:59.270 14:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.531 14:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:59.531 14:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.791 14:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:59.791 14:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:59.791 14:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.052 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:00.052 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.322 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:00.322 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.593 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:00.593 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:00.900 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:00.900 14:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:00.900 14:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.195 14:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:01.195 14:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:01.497 14:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.497 [2024-11-17 14:18:50.637763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.497 14:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:01.769 14:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:02.028 14:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.967 14:18:52 
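(Everything target/fio.sh has done so far — transport, backing bdevs, RAID volumes, subsystem, namespaces, listener, host connect — condenses to the RPC sequence sketched below. rpc.py stands for the full scripts/rpc.py path shown in the trace; each bdev_malloc_create call returns the Malloc<N> name noted in the comment, and the NQN, serial, address and port are this run's values.)

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512     # run twice: Malloc0, Malloc1 (plain namespaces)
  rpc.py bdev_malloc_create 64 512     # run twice: Malloc2, Malloc3 (raid0 members)
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_malloc_create 64 512     # run three times: Malloc4-Malloc6 (concat members)
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562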
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:02.967 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:02.967 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.967 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:02.967 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:02.967 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:05.506 14:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:05.506 14:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:05.506 14:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.506 14:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:05.506 14:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.506 14:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:05.506 14:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:05.506 [global] 00:09:05.506 thread=1 00:09:05.506 invalidate=1 00:09:05.506 rw=write 00:09:05.506 time_based=1 00:09:05.506 runtime=1 00:09:05.506 ioengine=libaio 00:09:05.506 direct=1 00:09:05.506 bs=4096 00:09:05.506 iodepth=1 00:09:05.506 norandommap=0 00:09:05.506 numjobs=1 00:09:05.506 00:09:05.506 verify_dump=1 00:09:05.506 verify_backlog=512 00:09:05.506 verify_state_save=0 00:09:05.506 do_verify=1 00:09:05.506 verify=crc32c-intel 00:09:05.506 [job0] 00:09:05.506 filename=/dev/nvme0n1 00:09:05.506 [job1] 00:09:05.506 filename=/dev/nvme0n2 00:09:05.506 [job2] 00:09:05.506 filename=/dev/nvme0n3 00:09:05.506 [job3] 00:09:05.506 filename=/dev/nvme0n4 00:09:05.506 Could not set queue depth (nvme0n1) 00:09:05.506 Could not set queue depth (nvme0n2) 00:09:05.506 Could not set queue depth (nvme0n3) 00:09:05.506 Could not set queue depth (nvme0n4) 00:09:05.506 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.506 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.506 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.506 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.506 fio-3.35 00:09:05.506 Starting 4 threads 00:09:06.911 00:09:06.911 job0: (groupid=0, jobs=1): err= 0: pid=1350818: Sun Nov 17 14:18:55 2024 00:09:06.911 read: IOPS=996, BW=3985KiB/s (4080kB/s)(4144KiB/1040msec) 00:09:06.911 slat (nsec): min=3279, max=28569, avg=8432.71, stdev=1811.07 00:09:06.911 clat (usec): min=185, max=41993, avg=723.91, stdev=4396.24 00:09:06.911 lat (usec): min=194, max=42016, avg=732.34, stdev=4397.45 00:09:06.911 clat percentiles (usec): 00:09:06.911 | 1.00th=[ 204], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 237], 
00:09:06.911 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:09:06.911 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 277], 00:09:06.911 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:06.911 | 99.99th=[42206] 00:09:06.911 write: IOPS=1476, BW=5908KiB/s (6049kB/s)(6144KiB/1040msec); 0 zone resets 00:09:06.911 slat (nsec): min=3100, max=41013, avg=9454.19, stdev=3697.91 00:09:06.911 clat (usec): min=109, max=308, avg=169.30, stdev=28.59 00:09:06.911 lat (usec): min=114, max=346, avg=178.75, stdev=29.33 00:09:06.911 clat percentiles (usec): 00:09:06.912 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 147], 00:09:06.912 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 165], 00:09:06.912 | 70.00th=[ 182], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 221], 00:09:06.912 | 99.00th=[ 251], 99.50th=[ 285], 99.90th=[ 306], 99.95th=[ 310], 00:09:06.912 | 99.99th=[ 310] 00:09:06.912 bw ( KiB/s): min= 2112, max=10176, per=31.20%, avg=6144.00, stdev=5702.11, samples=2 00:09:06.912 iops : min= 528, max= 2544, avg=1536.00, stdev=1425.53, samples=2 00:09:06.912 lat (usec) : 250=84.06%, 500=15.47% 00:09:06.912 lat (msec) : 50=0.47% 00:09:06.912 cpu : usr=1.83%, sys=1.83%, ctx=2572, majf=0, minf=1 00:09:06.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.912 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.912 job1: (groupid=0, jobs=1): err= 0: pid=1350819: Sun Nov 17 14:18:55 2024 00:09:06.912 read: IOPS=36, BW=147KiB/s (150kB/s)(148KiB/1010msec) 00:09:06.912 slat (nsec): min=5283, max=27438, avg=17400.84, stdev=7137.39 00:09:06.912 clat (usec): min=258, max=41400, avg=24740.94, stdev=20005.99 00:09:06.912 lat (usec): min=271, max=41408, avg=24758.34, stdev=20011.22 00:09:06.912 clat percentiles (usec): 00:09:06.912 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 338], 00:09:06.912 | 30.00th=[ 375], 40.00th=[ 8029], 50.00th=[41157], 60.00th=[41157], 00:09:06.912 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:06.912 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:06.912 | 99.99th=[41157] 00:09:06.912 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:09:06.912 slat (nsec): min=10688, max=42473, avg=12138.89, stdev=2249.00 00:09:06.912 clat (usec): min=137, max=293, avg=164.96, stdev=14.46 00:09:06.912 lat (usec): min=148, max=304, avg=177.10, stdev=15.23 00:09:06.912 clat percentiles (usec): 00:09:06.912 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:06.912 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:09:06.912 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 186], 00:09:06.912 | 99.00th=[ 202], 99.50th=[ 235], 99.90th=[ 293], 99.95th=[ 293], 00:09:06.912 | 99.99th=[ 293] 00:09:06.912 bw ( KiB/s): min= 4096, max= 4096, per=20.80%, avg=4096.00, stdev= 0.00, samples=1 00:09:06.912 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:06.912 lat (usec) : 250=92.90%, 500=2.73% 00:09:06.912 lat (msec) : 2=0.18%, 10=0.18%, 50=4.01% 00:09:06.912 cpu : usr=0.50%, sys=0.89%, ctx=551, majf=0, minf=1 00:09:06.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:09:06.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.912 issued rwts: total=37,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.912 job2: (groupid=0, jobs=1): err= 0: pid=1350820: Sun Nov 17 14:18:55 2024 00:09:06.912 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec) 00:09:06.912 slat (nsec): min=10314, max=24048, avg=21964.96, stdev=3637.72 00:09:06.912 clat (usec): min=239, max=41998, avg=39354.53, stdev=8536.30 00:09:06.912 lat (usec): min=263, max=42023, avg=39376.50, stdev=8536.06 00:09:06.912 clat percentiles (usec): 00:09:06.912 | 1.00th=[ 241], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:06.912 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:06.912 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:06.912 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:06.912 | 99.99th=[42206] 00:09:06.912 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:09:06.912 slat (nsec): min=11077, max=60241, avg=12315.82, stdev=2657.75 00:09:06.912 clat (usec): min=142, max=319, avg=173.90, stdev=14.88 00:09:06.912 lat (usec): min=154, max=379, avg=186.21, stdev=16.02 00:09:06.912 clat percentiles (usec): 00:09:06.912 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:09:06.912 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:09:06.912 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:09:06.912 | 99.00th=[ 217], 99.50th=[ 227], 99.90th=[ 318], 99.95th=[ 318], 00:09:06.912 | 99.99th=[ 318] 00:09:06.912 bw ( KiB/s): min= 4096, max= 4096, per=20.80%, avg=4096.00, stdev= 0.00, samples=1 00:09:06.912 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:06.912 lat (usec) : 250=95.70%, 500=0.19% 00:09:06.912 lat (msec) : 50=4.11% 00:09:06.912 cpu : usr=0.40%, sys=1.00%, ctx=536, majf=0, minf=1 00:09:06.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.912 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.912 job3: (groupid=0, jobs=1): err= 0: pid=1350821: Sun Nov 17 14:18:55 2024 00:09:06.912 read: IOPS=2154, BW=8619KiB/s (8826kB/s)(8628KiB/1001msec) 00:09:06.912 slat (nsec): min=6151, max=27861, avg=8378.85, stdev=1998.91 00:09:06.912 clat (usec): min=177, max=919, avg=249.98, stdev=46.52 00:09:06.912 lat (usec): min=184, max=942, avg=258.36, stdev=46.77 00:09:06.912 clat percentiles (usec): 00:09:06.912 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 202], 20.00th=[ 233], 00:09:06.912 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:09:06.912 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 285], 00:09:06.912 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[ 668], 99.95th=[ 775], 00:09:06.912 | 99.99th=[ 922] 00:09:06.912 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:06.912 slat (nsec): min=9859, max=59341, avg=12244.20, stdev=2896.73 00:09:06.912 clat (usec): min=113, max=514, avg=155.80, stdev=26.28 00:09:06.912 lat (usec): min=126, max=544, avg=168.04, stdev=27.31 
00:09:06.912 clat percentiles (usec): 00:09:06.912 | 1.00th=[ 123], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 139], 00:09:06.912 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:09:06.912 | 70.00th=[ 159], 80.00th=[ 172], 90.00th=[ 196], 95.00th=[ 208], 00:09:06.912 | 99.00th=[ 235], 99.50th=[ 253], 99.90th=[ 330], 99.95th=[ 334], 00:09:06.912 | 99.99th=[ 515] 00:09:06.912 bw ( KiB/s): min= 9456, max= 9456, per=48.02%, avg=9456.00, stdev= 0.00, samples=1 00:09:06.912 iops : min= 2364, max= 2364, avg=2364.00, stdev= 0.00, samples=1 00:09:06.912 lat (usec) : 250=77.78%, 500=22.03%, 750=0.15%, 1000=0.04% 00:09:06.912 cpu : usr=3.20%, sys=4.30%, ctx=4719, majf=0, minf=1 00:09:06.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.912 issued rwts: total=2157,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.912 00:09:06.912 Run status group 0 (all jobs): 00:09:06.912 READ: bw=12.2MiB/s (12.8MB/s), 91.7KiB/s-8619KiB/s (93.9kB/s-8826kB/s), io=12.7MiB (13.3MB), run=1001-1040msec 00:09:06.912 WRITE: bw=19.2MiB/s (20.2MB/s), 2028KiB/s-9.99MiB/s (2076kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1040msec 00:09:06.912 00:09:06.912 Disk stats (read/write): 00:09:06.912 nvme0n1: ios=1080/1536, merge=0/0, ticks=522/251, in_queue=773, util=82.06% 00:09:06.912 nvme0n2: ios=82/512, merge=0/0, ticks=1346/76, in_queue=1422, util=97.53% 00:09:06.912 nvme0n3: ios=18/512, merge=0/0, ticks=699/90, in_queue=789, util=87.60% 00:09:06.912 nvme0n4: ios=1756/2048, merge=0/0, ticks=1015/303, in_queue=1318, util=97.56% 00:09:06.912 14:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:06.912 [global] 00:09:06.912 thread=1 00:09:06.912 invalidate=1 00:09:06.912 rw=randwrite 00:09:06.912 time_based=1 00:09:06.912 runtime=1 00:09:06.912 ioengine=libaio 00:09:06.912 direct=1 00:09:06.912 bs=4096 00:09:06.912 iodepth=1 00:09:06.912 norandommap=0 00:09:06.912 numjobs=1 00:09:06.912 00:09:06.912 verify_dump=1 00:09:06.912 verify_backlog=512 00:09:06.912 verify_state_save=0 00:09:06.912 do_verify=1 00:09:06.912 verify=crc32c-intel 00:09:06.912 [job0] 00:09:06.912 filename=/dev/nvme0n1 00:09:06.912 [job1] 00:09:06.912 filename=/dev/nvme0n2 00:09:06.912 [job2] 00:09:06.912 filename=/dev/nvme0n3 00:09:06.912 [job3] 00:09:06.912 filename=/dev/nvme0n4 00:09:06.912 Could not set queue depth (nvme0n1) 00:09:06.912 Could not set queue depth (nvme0n2) 00:09:06.912 Could not set queue depth (nvme0n3) 00:09:06.912 Could not set queue depth (nvme0n4) 00:09:07.175 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.175 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.175 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.175 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.175 fio-3.35 00:09:07.175 Starting 4 threads 00:09:08.555 00:09:08.555 job0: (groupid=0, jobs=1): err= 0: pid=1351219: Sun Nov 17 14:18:57 2024 00:09:08.555 read: IOPS=21, 
BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:09:08.555 slat (nsec): min=9840, max=29453, avg=21307.91, stdev=3157.67 00:09:08.555 clat (usec): min=40414, max=41893, avg=40988.70, stdev=247.14 00:09:08.555 lat (usec): min=40424, max=41922, avg=41010.01, stdev=249.75 00:09:08.555 clat percentiles (usec): 00:09:08.555 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:08.555 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:08.555 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:08.555 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:08.555 | 99.99th=[41681] 00:09:08.555 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:08.555 slat (nsec): min=9677, max=49311, avg=10993.34, stdev=2110.57 00:09:08.555 clat (usec): min=136, max=306, avg=183.40, stdev=15.44 00:09:08.555 lat (usec): min=146, max=355, avg=194.40, stdev=16.22 00:09:08.555 clat percentiles (usec): 00:09:08.555 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:09:08.555 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 186], 00:09:08.555 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 208], 00:09:08.555 | 99.00th=[ 221], 99.50th=[ 231], 99.90th=[ 306], 99.95th=[ 306], 00:09:08.555 | 99.99th=[ 306] 00:09:08.555 bw ( KiB/s): min= 4096, max= 4096, per=23.84%, avg=4096.00, stdev= 0.00, samples=1 00:09:08.555 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:08.555 lat (usec) : 250=95.69%, 500=0.19% 00:09:08.555 lat (msec) : 50=4.12% 00:09:08.555 cpu : usr=0.50%, sys=0.80%, ctx=534, majf=0, minf=1 00:09:08.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.555 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.555 job1: (groupid=0, jobs=1): err= 0: pid=1351233: Sun Nov 17 14:18:57 2024 00:09:08.555 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:08.555 slat (nsec): min=7244, max=37990, avg=8699.85, stdev=1734.53 00:09:08.555 clat (usec): min=173, max=41359, avg=701.10, stdev=4204.14 00:09:08.555 lat (usec): min=182, max=41367, avg=709.80, stdev=4204.39 00:09:08.555 clat percentiles (usec): 00:09:08.555 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 208], 00:09:08.555 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 241], 60.00th=[ 251], 00:09:08.555 | 70.00th=[ 269], 80.00th=[ 367], 90.00th=[ 371], 95.00th=[ 375], 00:09:08.555 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:08.555 | 99.99th=[41157] 00:09:08.555 write: IOPS=1406, BW=5626KiB/s (5761kB/s)(5632KiB/1001msec); 0 zone resets 00:09:08.555 slat (nsec): min=10440, max=42134, avg=12854.27, stdev=2313.46 00:09:08.555 clat (usec): min=119, max=352, avg=175.83, stdev=37.12 00:09:08.555 lat (usec): min=130, max=394, avg=188.68, stdev=37.28 00:09:08.555 clat percentiles (usec): 00:09:08.555 | 1.00th=[ 125], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 143], 00:09:08.555 | 30.00th=[ 151], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 178], 00:09:08.555 | 70.00th=[ 186], 80.00th=[ 206], 90.00th=[ 241], 95.00th=[ 243], 00:09:08.555 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 293], 99.95th=[ 355], 00:09:08.555 | 99.99th=[ 355] 00:09:08.555 bw ( KiB/s): min= 4096, max= 
4096, per=23.84%, avg=4096.00, stdev= 0.00, samples=1 00:09:08.555 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:08.555 lat (usec) : 250=82.32%, 500=17.23% 00:09:08.555 lat (msec) : 50=0.45% 00:09:08.555 cpu : usr=1.90%, sys=4.30%, ctx=2436, majf=0, minf=1 00:09:08.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.555 issued rwts: total=1024,1408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.555 job2: (groupid=0, jobs=1): err= 0: pid=1351252: Sun Nov 17 14:18:57 2024 00:09:08.555 read: IOPS=1922, BW=7688KiB/s (7873kB/s)(7696KiB/1001msec) 00:09:08.555 slat (nsec): min=3179, max=30836, avg=7496.50, stdev=1508.44 00:09:08.555 clat (usec): min=171, max=41008, avg=323.79, stdev=1604.59 00:09:08.555 lat (usec): min=179, max=41015, avg=331.29, stdev=1604.60 00:09:08.555 clat percentiles (usec): 00:09:08.555 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 231], 00:09:08.555 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:09:08.555 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 367], 95.00th=[ 371], 00:09:08.555 | 99.00th=[ 375], 99.50th=[ 379], 99.90th=[41157], 99.95th=[41157], 00:09:08.555 | 99.99th=[41157] 00:09:08.555 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:08.555 slat (nsec): min=9468, max=35532, avg=10648.55, stdev=1300.86 00:09:08.555 clat (usec): min=113, max=676, avg=162.41, stdev=37.98 00:09:08.555 lat (usec): min=124, max=686, avg=173.06, stdev=38.16 00:09:08.555 clat percentiles (usec): 00:09:08.555 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 135], 00:09:08.555 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 161], 00:09:08.555 | 70.00th=[ 172], 80.00th=[ 186], 90.00th=[ 235], 95.00th=[ 243], 00:09:08.555 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 441], 99.95th=[ 529], 00:09:08.555 | 99.99th=[ 676] 00:09:08.555 bw ( KiB/s): min= 9392, max= 9392, per=54.66%, avg=9392.00, stdev= 0.00, samples=1 00:09:08.555 iops : min= 2348, max= 2348, avg=2348.00, stdev= 0.00, samples=1 00:09:08.555 lat (usec) : 250=75.83%, 500=24.04%, 750=0.05% 00:09:08.555 lat (msec) : 50=0.08% 00:09:08.555 cpu : usr=1.70%, sys=4.00%, ctx=3974, majf=0, minf=1 00:09:08.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.555 issued rwts: total=1924,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.555 job3: (groupid=0, jobs=1): err= 0: pid=1351257: Sun Nov 17 14:18:57 2024 00:09:08.555 read: IOPS=22, BW=88.2KiB/s (90.3kB/s)(92.0KiB/1043msec) 00:09:08.555 slat (nsec): min=9016, max=23838, avg=20748.17, stdev=4683.22 00:09:08.555 clat (usec): min=40725, max=42092, avg=41179.69, stdev=435.59 00:09:08.555 lat (usec): min=40735, max=42115, avg=41200.44, stdev=435.78 00:09:08.555 clat percentiles (usec): 00:09:08.555 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:08.555 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:08.555 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:09:08.555 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:08.555 | 99.99th=[42206] 00:09:08.555 write: IOPS=490, BW=1964KiB/s (2011kB/s)(2048KiB/1043msec); 0 zone resets 00:09:08.555 slat (nsec): min=9737, max=36870, avg=10639.30, stdev=1438.97 00:09:08.555 clat (usec): min=142, max=295, avg=171.98, stdev=15.99 00:09:08.555 lat (usec): min=152, max=307, avg=182.62, stdev=16.27 00:09:08.555 clat percentiles (usec): 00:09:08.555 | 1.00th=[ 149], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 161], 00:09:08.555 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:09:08.555 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:09:08.555 | 99.00th=[ 215], 99.50th=[ 277], 99.90th=[ 297], 99.95th=[ 297], 00:09:08.555 | 99.99th=[ 297] 00:09:08.555 bw ( KiB/s): min= 4096, max= 4096, per=23.84%, avg=4096.00, stdev= 0.00, samples=1 00:09:08.555 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:08.555 lat (usec) : 250=94.95%, 500=0.75% 00:09:08.555 lat (msec) : 50=4.30% 00:09:08.555 cpu : usr=0.19%, sys=0.58%, ctx=537, majf=0, minf=1 00:09:08.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.555 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.555 00:09:08.555 Run status group 0 (all jobs): 00:09:08.555 READ: bw=11.2MiB/s (11.8MB/s), 87.6KiB/s-7688KiB/s (89.8kB/s-7873kB/s), io=11.7MiB (12.3MB), run=1001-1043msec 00:09:08.555 WRITE: bw=16.8MiB/s (17.6MB/s), 1964KiB/s-8184KiB/s (2011kB/s-8380kB/s), io=17.5MiB (18.3MB), run=1001-1043msec 00:09:08.555 00:09:08.555 Disk stats (read/write): 00:09:08.555 nvme0n1: ios=50/512, merge=0/0, ticks=874/92, in_queue=966, util=98.60% 00:09:08.555 nvme0n2: ios=947/1024, merge=0/0, ticks=1108/179, in_queue=1287, util=97.46% 00:09:08.555 nvme0n3: ios=1570/1746, merge=0/0, ticks=1077/280, in_queue=1357, util=99.48% 00:09:08.555 nvme0n4: ios=68/512, merge=0/0, ticks=1460/86, in_queue=1546, util=98.43% 00:09:08.555 14:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:08.555 [global] 00:09:08.555 thread=1 00:09:08.555 invalidate=1 00:09:08.555 rw=write 00:09:08.555 time_based=1 00:09:08.555 runtime=1 00:09:08.555 ioengine=libaio 00:09:08.555 direct=1 00:09:08.555 bs=4096 00:09:08.555 iodepth=128 00:09:08.555 norandommap=0 00:09:08.555 numjobs=1 00:09:08.555 00:09:08.555 verify_dump=1 00:09:08.555 verify_backlog=512 00:09:08.555 verify_state_save=0 00:09:08.556 do_verify=1 00:09:08.556 verify=crc32c-intel 00:09:08.556 [job0] 00:09:08.556 filename=/dev/nvme0n1 00:09:08.556 [job1] 00:09:08.556 filename=/dev/nvme0n2 00:09:08.556 [job2] 00:09:08.556 filename=/dev/nvme0n3 00:09:08.556 [job3] 00:09:08.556 filename=/dev/nvme0n4 00:09:08.556 Could not set queue depth (nvme0n1) 00:09:08.556 Could not set queue depth (nvme0n2) 00:09:08.556 Could not set queue depth (nvme0n3) 00:09:08.556 Could not set queue depth (nvme0n4) 00:09:08.556 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.556 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.556 job2: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.556 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.556 fio-3.35 00:09:08.556 Starting 4 threads 00:09:09.933 00:09:09.933 job0: (groupid=0, jobs=1): err= 0: pid=1351688: Sun Nov 17 14:18:58 2024 00:09:09.933 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:09:09.933 slat (nsec): min=1110, max=29218k, avg=134472.32, stdev=1161129.86 00:09:09.933 clat (usec): min=3488, max=55720, avg=16029.91, stdev=10499.92 00:09:09.933 lat (usec): min=3707, max=55730, avg=16164.38, stdev=10590.70 00:09:09.933 clat percentiles (usec): 00:09:09.933 | 1.00th=[ 3752], 5.00th=[ 5932], 10.00th=[ 7767], 20.00th=[ 8717], 00:09:09.933 | 30.00th=[ 9765], 40.00th=[11207], 50.00th=[11600], 60.00th=[11863], 00:09:09.933 | 70.00th=[16909], 80.00th=[23462], 90.00th=[33424], 95.00th=[36439], 00:09:09.933 | 99.00th=[51119], 99.50th=[52691], 99.90th=[55837], 99.95th=[55837], 00:09:09.933 | 99.99th=[55837] 00:09:09.933 write: IOPS=3685, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1009msec); 0 zone resets 00:09:09.933 slat (nsec): min=1824, max=19921k, avg=126292.82, stdev=812720.61 00:09:09.933 clat (usec): min=1239, max=55680, avg=18989.41, stdev=10921.66 00:09:09.933 lat (usec): min=1250, max=57216, avg=19115.70, stdev=10988.80 00:09:09.933 clat percentiles (usec): 00:09:09.933 | 1.00th=[ 3425], 5.00th=[ 6652], 10.00th=[ 7832], 20.00th=[10159], 00:09:09.933 | 30.00th=[10814], 40.00th=[15270], 50.00th=[16581], 60.00th=[17695], 00:09:09.933 | 70.00th=[23725], 80.00th=[26870], 90.00th=[36963], 95.00th=[41157], 00:09:09.933 | 99.00th=[50594], 99.50th=[52691], 99.90th=[53216], 99.95th=[55837], 00:09:09.933 | 99.99th=[55837] 00:09:09.933 bw ( KiB/s): min=12408, max=16384, per=23.76%, avg=14396.00, stdev=2811.46, samples=2 00:09:09.933 iops : min= 3102, max= 4096, avg=3599.00, stdev=702.86, samples=2 00:09:09.933 lat (msec) : 2=0.16%, 4=1.97%, 10=23.48%, 20=43.43%, 50=29.63% 00:09:09.933 lat (msec) : 100=1.31% 00:09:09.933 cpu : usr=2.98%, sys=2.98%, ctx=411, majf=0, minf=2 00:09:09.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:09.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.933 issued rwts: total=3584,3719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.933 job1: (groupid=0, jobs=1): err= 0: pid=1351703: Sun Nov 17 14:18:58 2024 00:09:09.933 read: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(15.5MiB/1012msec) 00:09:09.933 slat (nsec): min=1361, max=13028k, avg=92947.24, stdev=669645.03 00:09:09.933 clat (usec): min=2543, max=37508, avg=11207.68, stdev=4657.48 00:09:09.933 lat (usec): min=2898, max=37548, avg=11300.63, stdev=4713.66 00:09:09.933 clat percentiles (usec): 00:09:09.933 | 1.00th=[ 4359], 5.00th=[ 6849], 10.00th=[ 7635], 20.00th=[ 8160], 00:09:09.933 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10290], 00:09:09.933 | 70.00th=[11207], 80.00th=[12911], 90.00th=[19006], 95.00th=[22938], 00:09:09.934 | 99.00th=[26346], 99.50th=[26346], 99.90th=[26346], 99.95th=[35390], 00:09:09.934 | 99.99th=[37487] 00:09:09.934 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec); 0 zone resets 00:09:09.934 slat (usec): min=2, max=20559, avg=150.55, stdev=908.58 00:09:09.934 clat (usec): min=1775, max=104868, avg=20418.56, stdev=21421.30 
00:09:09.934 lat (usec): min=1783, max=104880, avg=20569.11, stdev=21554.01 00:09:09.934 clat percentiles (msec): 00:09:09.934 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 9], 00:09:09.934 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 15], 60.00th=[ 17], 00:09:09.934 | 70.00th=[ 18], 80.00th=[ 24], 90.00th=[ 49], 95.00th=[ 81], 00:09:09.934 | 99.00th=[ 100], 99.50th=[ 103], 99.90th=[ 106], 99.95th=[ 106], 00:09:09.934 | 99.99th=[ 106] 00:09:09.934 bw ( KiB/s): min=10832, max=21936, per=27.04%, avg=16384.00, stdev=7851.71, samples=2 00:09:09.934 iops : min= 2708, max= 5484, avg=4096.00, stdev=1962.93, samples=2 00:09:09.934 lat (msec) : 2=0.07%, 4=1.91%, 10=49.73%, 20=30.95%, 50=12.61% 00:09:09.934 lat (msec) : 100=4.37%, 250=0.36% 00:09:09.934 cpu : usr=3.07%, sys=4.25%, ctx=512, majf=0, minf=1 00:09:09.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:09.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.934 issued rwts: total=3962,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.934 job2: (groupid=0, jobs=1): err= 0: pid=1351724: Sun Nov 17 14:18:58 2024 00:09:09.934 read: IOPS=3381, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1007msec) 00:09:09.934 slat (nsec): min=1571, max=18154k, avg=144443.18, stdev=1075598.20 00:09:09.934 clat (usec): min=3892, max=52691, avg=17260.18, stdev=8209.66 00:09:09.934 lat (usec): min=4454, max=52698, avg=17404.62, stdev=8275.34 00:09:09.934 clat percentiles (usec): 00:09:09.934 | 1.00th=[ 7177], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10290], 00:09:09.934 | 30.00th=[12911], 40.00th=[13435], 50.00th=[14877], 60.00th=[16909], 00:09:09.934 | 70.00th=[19006], 80.00th=[20841], 90.00th=[28705], 95.00th=[34341], 00:09:09.934 | 99.00th=[46400], 99.50th=[49546], 99.90th=[52691], 99.95th=[52691], 00:09:09.934 | 99.99th=[52691] 00:09:09.934 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:09:09.934 slat (usec): min=2, max=23333, avg=135.49, stdev=809.29 00:09:09.934 clat (usec): min=2709, max=52675, avg=19174.14, stdev=8171.82 00:09:09.934 lat (usec): min=2733, max=52681, avg=19309.63, stdev=8238.96 00:09:09.934 clat percentiles (usec): 00:09:09.934 | 1.00th=[ 4359], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10421], 00:09:09.934 | 30.00th=[15139], 40.00th=[17695], 50.00th=[17957], 60.00th=[19006], 00:09:09.934 | 70.00th=[23725], 80.00th=[26608], 90.00th=[29492], 95.00th=[33817], 00:09:09.934 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[52691], 00:09:09.934 | 99.99th=[52691] 00:09:09.934 bw ( KiB/s): min=12288, max=16384, per=23.66%, avg=14336.00, stdev=2896.31, samples=2 00:09:09.934 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:09.934 lat (msec) : 4=0.30%, 10=18.30%, 20=50.52%, 50=30.66%, 100=0.21% 00:09:09.934 cpu : usr=3.28%, sys=4.57%, ctx=385, majf=0, minf=1 00:09:09.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:09.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.934 issued rwts: total=3405,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.934 job3: (groupid=0, jobs=1): err= 0: pid=1351729: Sun Nov 17 14:18:58 2024 00:09:09.934 read: IOPS=3527, 
BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec) 00:09:09.934 slat (nsec): min=1362, max=21293k, avg=130904.81, stdev=1023343.83 00:09:09.934 clat (usec): min=5427, max=70283, avg=17044.20, stdev=10162.19 00:09:09.934 lat (usec): min=5432, max=70286, avg=17175.10, stdev=10228.28 00:09:09.934 clat percentiles (usec): 00:09:09.934 | 1.00th=[ 7504], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10421], 00:09:09.934 | 30.00th=[10552], 40.00th=[10945], 50.00th=[13173], 60.00th=[15401], 00:09:09.934 | 70.00th=[18482], 80.00th=[21890], 90.00th=[29754], 95.00th=[42730], 00:09:09.934 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[64226], 00:09:09.934 | 99.99th=[70779] 00:09:09.934 write: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(15.6MiB/1016msec); 0 zone resets 00:09:09.934 slat (usec): min=2, max=18793, avg=121.79, stdev=785.25 00:09:09.934 clat (usec): min=1674, max=46891, avg=16993.62, stdev=8260.17 00:09:09.934 lat (usec): min=1697, max=46895, avg=17115.41, stdev=8326.56 00:09:09.934 clat percentiles (usec): 00:09:09.934 | 1.00th=[ 4047], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9634], 00:09:09.934 | 30.00th=[10290], 40.00th=[13829], 50.00th=[16581], 60.00th=[17957], 00:09:09.934 | 70.00th=[18744], 80.00th=[22676], 90.00th=[26870], 95.00th=[33424], 00:09:09.934 | 99.00th=[44303], 99.50th=[44827], 99.90th=[46924], 99.95th=[46924], 00:09:09.934 | 99.99th=[46924] 00:09:09.934 bw ( KiB/s): min=12288, max=18616, per=25.50%, avg=15452.00, stdev=4474.57, samples=2 00:09:09.934 iops : min= 3072, max= 4654, avg=3863.00, stdev=1118.64, samples=2 00:09:09.934 lat (msec) : 2=0.03%, 4=0.34%, 10=18.30%, 20=58.61%, 50=21.59% 00:09:09.934 lat (msec) : 100=1.14% 00:09:09.934 cpu : usr=3.35%, sys=4.33%, ctx=319, majf=0, minf=1 00:09:09.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:09.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.934 issued rwts: total=3584,3990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.934 00:09:09.934 Run status group 0 (all jobs): 00:09:09.934 READ: bw=55.9MiB/s (58.6MB/s), 13.2MiB/s-15.3MiB/s (13.8MB/s-16.0MB/s), io=56.8MiB (59.5MB), run=1007-1016msec 00:09:09.934 WRITE: bw=59.2MiB/s (62.0MB/s), 13.9MiB/s-15.8MiB/s (14.6MB/s-16.6MB/s), io=60.1MiB (63.0MB), run=1007-1016msec 00:09:09.934 00:09:09.934 Disk stats (read/write): 00:09:09.934 nvme0n1: ios=2812/3072, merge=0/0, ticks=35740/38988, in_queue=74728, util=86.87% 00:09:09.934 nvme0n2: ios=3624/3719, merge=0/0, ticks=38811/64497, in_queue=103308, util=97.57% 00:09:09.934 nvme0n3: ios=3093/3103, merge=0/0, ticks=53133/52792, in_queue=105925, util=97.30% 00:09:09.934 nvme0n4: ios=2986/3072, merge=0/0, ticks=46446/53111, in_queue=99557, util=90.99% 00:09:09.934 14:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:09.934 [global] 00:09:09.934 thread=1 00:09:09.934 invalidate=1 00:09:09.934 rw=randwrite 00:09:09.934 time_based=1 00:09:09.934 runtime=1 00:09:09.934 ioengine=libaio 00:09:09.934 direct=1 00:09:09.934 bs=4096 00:09:09.934 iodepth=128 00:09:09.934 norandommap=0 00:09:09.934 numjobs=1 00:09:09.934 00:09:09.934 verify_dump=1 00:09:09.934 verify_backlog=512 00:09:09.934 verify_state_save=0 00:09:09.934 do_verify=1 00:09:09.934 verify=crc32c-intel 00:09:09.934 
[job0] 00:09:09.934 filename=/dev/nvme0n1 00:09:09.934 [job1] 00:09:09.934 filename=/dev/nvme0n2 00:09:09.934 [job2] 00:09:09.934 filename=/dev/nvme0n3 00:09:09.934 [job3] 00:09:09.934 filename=/dev/nvme0n4 00:09:09.934 Could not set queue depth (nvme0n1) 00:09:09.934 Could not set queue depth (nvme0n2) 00:09:09.934 Could not set queue depth (nvme0n3) 00:09:09.934 Could not set queue depth (nvme0n4) 00:09:10.194 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:10.194 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:10.194 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:10.194 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:10.194 fio-3.35 00:09:10.194 Starting 4 threads 00:09:11.572 00:09:11.572 job0: (groupid=0, jobs=1): err= 0: pid=1352161: Sun Nov 17 14:19:00 2024 00:09:11.572 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:09:11.572 slat (nsec): min=1373, max=42838k, avg=100187.04, stdev=782362.10 00:09:11.573 clat (usec): min=8033, max=53091, avg=12605.07, stdev=6866.03 00:09:11.573 lat (usec): min=8355, max=54059, avg=12705.25, stdev=6889.15 00:09:11.573 clat percentiles (usec): 00:09:11.573 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10421], 00:09:11.573 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:09:11.573 | 70.00th=[11600], 80.00th=[12125], 90.00th=[13304], 95.00th=[16057], 00:09:11.573 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:09:11.573 | 99.99th=[53216] 00:09:11.573 write: IOPS=4899, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1002msec); 0 zone resets 00:09:11.573 slat (usec): min=2, max=42582, avg=105.17, stdev=809.07 00:09:11.573 clat (usec): min=699, max=51758, avg=13768.97, stdev=8335.07 00:09:11.573 lat (usec): min=2961, max=53515, avg=13874.14, stdev=8360.49 00:09:11.573 clat percentiles (usec): 00:09:11.573 | 1.00th=[ 5538], 5.00th=[ 8717], 10.00th=[ 9896], 20.00th=[10552], 00:09:11.573 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[11076], 00:09:11.573 | 70.00th=[11338], 80.00th=[13173], 90.00th=[22152], 95.00th=[33424], 00:09:11.573 | 99.00th=[51643], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:09:11.573 | 99.99th=[51643] 00:09:11.573 bw ( KiB/s): min=18112, max=20136, per=27.66%, avg=19124.00, stdev=1431.18, samples=2 00:09:11.573 iops : min= 4528, max= 5034, avg=4781.00, stdev=357.80, samples=2 00:09:11.573 lat (usec) : 750=0.01% 00:09:11.573 lat (msec) : 4=0.34%, 10=12.04%, 20=79.93%, 50=5.35%, 100=2.33% 00:09:11.573 cpu : usr=2.40%, sys=4.90%, ctx=529, majf=0, minf=1 00:09:11.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:11.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.573 issued rwts: total=4608,4909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.573 job1: (groupid=0, jobs=1): err= 0: pid=1352162: Sun Nov 17 14:19:00 2024 00:09:11.573 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:09:11.573 slat (nsec): min=1356, max=21397k, avg=127304.71, stdev=954664.43 00:09:11.573 clat (usec): min=4693, max=87682, avg=14900.11, stdev=10200.50 00:09:11.573 lat (usec): 
min=4703, max=87689, avg=15027.41, stdev=10297.37 00:09:11.573 clat percentiles (usec): 00:09:11.573 | 1.00th=[ 5866], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10683], 00:09:11.573 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:09:11.573 | 70.00th=[12649], 80.00th=[17695], 90.00th=[22676], 95.00th=[26608], 00:09:11.573 | 99.00th=[74974], 99.50th=[80217], 99.90th=[87557], 99.95th=[87557], 00:09:11.573 | 99.99th=[87557] 00:09:11.573 write: IOPS=3890, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1009msec); 0 zone resets 00:09:11.573 slat (usec): min=2, max=11447, avg=131.19, stdev=780.46 00:09:11.573 clat (usec): min=1552, max=87679, avg=18984.93, stdev=17673.87 00:09:11.573 lat (usec): min=1565, max=87687, avg=19116.12, stdev=17769.29 00:09:11.573 clat percentiles (usec): 00:09:11.573 | 1.00th=[ 3916], 5.00th=[ 5932], 10.00th=[ 7242], 20.00th=[ 8848], 00:09:11.573 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10945], 60.00th=[11600], 00:09:11.573 | 70.00th=[17433], 80.00th=[21890], 90.00th=[50594], 95.00th=[61604], 00:09:11.573 | 99.00th=[84411], 99.50th=[85459], 99.90th=[87557], 99.95th=[87557], 00:09:11.573 | 99.99th=[87557] 00:09:11.573 bw ( KiB/s): min=11272, max=19120, per=21.98%, avg=15196.00, stdev=5549.37, samples=2 00:09:11.573 iops : min= 2818, max= 4780, avg=3799.00, stdev=1387.34, samples=2 00:09:11.573 lat (msec) : 2=0.13%, 4=0.48%, 10=20.32%, 20=57.59%, 50=14.89% 00:09:11.573 lat (msec) : 100=6.59% 00:09:11.573 cpu : usr=3.08%, sys=4.17%, ctx=361, majf=0, minf=2 00:09:11.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:11.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.573 issued rwts: total=3584,3926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.573 job2: (groupid=0, jobs=1): err= 0: pid=1352163: Sun Nov 17 14:19:00 2024 00:09:11.573 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:09:11.573 slat (nsec): min=1444, max=18966k, avg=123151.09, stdev=919823.28 00:09:11.573 clat (usec): min=3503, max=47766, avg=14869.98, stdev=6204.20 00:09:11.573 lat (usec): min=3509, max=47781, avg=14993.13, stdev=6270.96 00:09:11.573 clat percentiles (usec): 00:09:11.573 | 1.00th=[ 4883], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[11207], 00:09:11.573 | 30.00th=[11469], 40.00th=[11600], 50.00th=[12387], 60.00th=[13304], 00:09:11.573 | 70.00th=[15664], 80.00th=[19006], 90.00th=[26084], 95.00th=[27919], 00:09:11.573 | 99.00th=[31851], 99.50th=[36439], 99.90th=[38011], 99.95th=[38011], 00:09:11.573 | 99.99th=[47973] 00:09:11.573 write: IOPS=4080, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:09:11.573 slat (usec): min=2, max=12999, avg=114.94, stdev=628.08 00:09:11.573 clat (usec): min=2273, max=68746, avg=15914.25, stdev=12804.28 00:09:11.573 lat (usec): min=2300, max=68750, avg=16029.19, stdev=12874.74 00:09:11.573 clat percentiles (usec): 00:09:11.573 | 1.00th=[ 3523], 5.00th=[ 5538], 10.00th=[ 7570], 20.00th=[ 9503], 00:09:11.573 | 30.00th=[10290], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:09:11.573 | 70.00th=[11994], 80.00th=[20317], 90.00th=[34866], 95.00th=[48497], 00:09:11.573 | 99.00th=[64750], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:09:11.573 | 99.99th=[68682] 00:09:11.573 bw ( KiB/s): min=15568, max=17200, per=23.70%, avg=16384.00, stdev=1154.00, samples=2 00:09:11.573 iops : min= 3892, max= 4300, 
avg=4096.00, stdev=288.50, samples=2 00:09:11.573 lat (msec) : 4=1.43%, 10=19.54%, 20=60.41%, 50=16.50%, 100=2.12% 00:09:11.573 cpu : usr=3.09%, sys=4.49%, ctx=518, majf=0, minf=1 00:09:11.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:11.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.573 issued rwts: total=4096,4097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.573 job3: (groupid=0, jobs=1): err= 0: pid=1352164: Sun Nov 17 14:19:00 2024 00:09:11.573 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:09:11.573 slat (nsec): min=1169, max=14321k, avg=108498.21, stdev=807146.18 00:09:11.573 clat (usec): min=4719, max=46578, avg=15404.43, stdev=6191.40 00:09:11.573 lat (usec): min=4724, max=46585, avg=15512.93, stdev=6234.45 00:09:11.573 clat percentiles (usec): 00:09:11.573 | 1.00th=[ 7373], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[11338], 00:09:11.573 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13042], 60.00th=[14746], 00:09:11.573 | 70.00th=[16909], 80.00th=[19268], 90.00th=[21627], 95.00th=[26084], 00:09:11.573 | 99.00th=[40633], 99.50th=[44827], 99.90th=[46400], 99.95th=[46400], 00:09:11.573 | 99.99th=[46400] 00:09:11.573 write: IOPS=4490, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1004msec); 0 zone resets 00:09:11.573 slat (nsec): min=1924, max=25498k, avg=86119.26, stdev=748323.35 00:09:11.573 clat (usec): min=314, max=71182, avg=14239.85, stdev=10848.30 00:09:11.573 lat (usec): min=1191, max=71191, avg=14325.96, stdev=10892.48 00:09:11.573 clat percentiles (usec): 00:09:11.573 | 1.00th=[ 3720], 5.00th=[ 5342], 10.00th=[ 6390], 20.00th=[ 8455], 00:09:11.573 | 30.00th=[ 9503], 40.00th=[10683], 50.00th=[11469], 60.00th=[11994], 00:09:11.573 | 70.00th=[13304], 80.00th=[16188], 90.00th=[20841], 95.00th=[42730], 00:09:11.573 | 99.00th=[57934], 99.50th=[59507], 99.90th=[70779], 99.95th=[70779], 00:09:11.573 | 99.99th=[70779] 00:09:11.573 bw ( KiB/s): min=16384, max=18656, per=25.34%, avg=17520.00, stdev=1606.55, samples=2 00:09:11.573 iops : min= 4096, max= 4664, avg=4380.00, stdev=401.64, samples=2 00:09:11.573 lat (usec) : 500=0.01% 00:09:11.573 lat (msec) : 2=0.02%, 4=0.79%, 10=22.00%, 20=63.04%, 50=12.16% 00:09:11.573 lat (msec) : 100=1.98% 00:09:11.573 cpu : usr=3.39%, sys=4.19%, ctx=329, majf=0, minf=1 00:09:11.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:11.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.573 issued rwts: total=4096,4508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.573 00:09:11.573 Run status group 0 (all jobs): 00:09:11.573 READ: bw=63.4MiB/s (66.5MB/s), 13.9MiB/s-18.0MiB/s (14.5MB/s-18.8MB/s), io=64.0MiB (67.1MB), run=1002-1009msec 00:09:11.573 WRITE: bw=67.5MiB/s (70.8MB/s), 15.2MiB/s-19.1MiB/s (15.9MB/s-20.1MB/s), io=68.1MiB (71.4MB), run=1002-1009msec 00:09:11.573 00:09:11.573 Disk stats (read/write): 00:09:11.573 nvme0n1: ios=3634/3713, merge=0/0, ticks=12748/11478, in_queue=24226, util=88.98% 00:09:11.573 nvme0n2: ios=3122/3255, merge=0/0, ticks=44885/51080, in_queue=95965, util=86.01% 00:09:11.573 nvme0n3: ios=3320/3584, merge=0/0, ticks=49068/48858, in_queue=97926, util=97.39% 00:09:11.573 nvme0n4: 
ios=3112/3343, merge=0/0, ticks=35284/44506, in_queue=79790, util=95.69% 00:09:11.573 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:11.573 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1352311 00:09:11.573 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:11.574 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:11.574 [global] 00:09:11.574 thread=1 00:09:11.574 invalidate=1 00:09:11.574 rw=read 00:09:11.574 time_based=1 00:09:11.574 runtime=10 00:09:11.574 ioengine=libaio 00:09:11.574 direct=1 00:09:11.574 bs=4096 00:09:11.574 iodepth=1 00:09:11.574 norandommap=1 00:09:11.574 numjobs=1 00:09:11.574 00:09:11.574 [job0] 00:09:11.574 filename=/dev/nvme0n1 00:09:11.574 [job1] 00:09:11.574 filename=/dev/nvme0n2 00:09:11.574 [job2] 00:09:11.574 filename=/dev/nvme0n3 00:09:11.574 [job3] 00:09:11.574 filename=/dev/nvme0n4 00:09:11.574 Could not set queue depth (nvme0n1) 00:09:11.574 Could not set queue depth (nvme0n2) 00:09:11.574 Could not set queue depth (nvme0n3) 00:09:11.574 Could not set queue depth (nvme0n4) 00:09:11.833 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.833 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.833 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.833 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.833 fio-3.35 00:09:11.833 Starting 4 threads 00:09:14.371 14:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:14.630 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43134976, buflen=4096 00:09:14.630 fio: pid=1352535, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:14.630 14:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:14.891 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.891 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:14.891 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2699264, buflen=4096 00:09:14.891 fio: pid=1352534, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:15.150 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.150 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:15.150 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=35201024, buflen=4096 00:09:15.150 fio: pid=1352532, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:15.411 fio: io_u error on file /dev/nvme0n2: Operation not supported: 
read offset=51445760, buflen=4096
00:09:15.411 fio: pid=1352533, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:09:15.411 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:15.411 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:09:15.411
00:09:15.411 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1352532: Sun Nov 17 14:19:04 2024
00:09:15.411 read: IOPS=2741, BW=10.7MiB/s (11.2MB/s)(33.6MiB/3135msec)
00:09:15.411 slat (usec): min=6, max=9866, avg= 9.10, stdev=106.36
00:09:15.411 clat (usec): min=169, max=42021, avg=351.31, stdev=2291.59
00:09:15.411 lat (usec): min=177, max=51053, avg=360.41, stdev=2315.11
00:09:15.411 clat percentiles (usec):
00:09:15.411 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212],
00:09:15.411 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227],
00:09:15.411 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 247],
00:09:15.411 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[41157], 99.95th=[41681],
00:09:15.411 | 99.99th=[42206]
00:09:15.411 bw ( KiB/s): min= 93, max=17480, per=29.42%, avg=11454.17, stdev=8436.20, samples=6
00:09:15.411 iops : min= 23, max= 4370, avg=2863.50, stdev=2109.12, samples=6
00:09:15.411 lat (usec) : 250=96.84%, 500=2.84%
00:09:15.411 lat (msec) : 50=0.31%
00:09:15.411 cpu : usr=1.50%, sys=4.31%, ctx=8597, majf=0, minf=1
00:09:15.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:15.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:15.411 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:15.411 issued rwts: total=8595,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:15.411 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:15.411 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1352533: Sun Nov 17 14:19:04 2024
00:09:15.411 read: IOPS=3780, BW=14.8MiB/s (15.5MB/s)(49.1MiB/3323msec)
00:09:15.411 slat (usec): min=6, max=21828, avg=16.03, stdev=334.11
00:09:15.411 clat (usec): min=164, max=9922, avg=244.70, stdev=96.34
00:09:15.411 lat (usec): min=183, max=22212, avg=260.74, stdev=350.03
00:09:15.411 clat percentiles (usec):
00:09:15.411 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 215],
00:09:15.411 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243],
00:09:15.411 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 297], 95.00th=[ 326],
00:09:15.411 | 99.00th=[ 416], 99.50th=[ 433], 99.90th=[ 457], 99.95th=[ 482],
00:09:15.411 | 99.99th=[ 709]
00:09:15.411 bw ( KiB/s): min=14056, max=16680, per=39.71%, avg=15461.00, stdev=999.28, samples=6
00:09:15.411 iops : min= 3514, max= 4170, avg=3865.17, stdev=249.73, samples=6
00:09:15.411 lat (usec) : 250=68.64%, 500=31.31%, 750=0.03%
00:09:15.411 lat (msec) : 10=0.01%
00:09:15.411 cpu : usr=1.57%, sys=7.19%, ctx=12567, majf=0, minf=2
00:09:15.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:15.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:15.411 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:15.411 issued rwts: total=12561,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:15.411 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:15.411 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1352534: Sun Nov 17 14:19:04 2024
00:09:15.411 read: IOPS=222, BW=890KiB/s (911kB/s)(2636KiB/2962msec)
00:09:15.411 slat (usec): min=6, max=11873, avg=27.23, stdev=461.85
00:09:15.411 clat (usec): min=202, max=42040, avg=4426.36, stdev=12343.44
00:09:15.411 lat (usec): min=209, max=52923, avg=4453.59, stdev=12409.35
00:09:15.411 clat percentiles (usec):
00:09:15.411 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 247],
00:09:15.411 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 281],
00:09:15.411 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[40633], 95.00th=[41157],
00:09:15.411 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:15.411 | 99.99th=[42206]
00:09:15.411 bw ( KiB/s): min= 104, max= 1792, per=2.66%, avg=1035.20, stdev=860.52, samples=5
00:09:15.411 iops : min= 26, max= 448, avg=258.80, stdev=215.13, samples=5
00:09:15.411 lat (usec) : 250=25.45%, 500=63.64%, 750=0.61%
00:09:15.411 lat (msec) : 50=10.15%
00:09:15.411 cpu : usr=0.10%, sys=0.27%, ctx=661, majf=0, minf=2
00:09:15.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:15.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:15.411 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:15.411 issued rwts: total=660,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:15.411 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:15.411 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1352535: Sun Nov 17 14:19:04 2024
00:09:15.411 read: IOPS=3877, BW=15.1MiB/s (15.9MB/s)(41.1MiB/2716msec)
00:09:15.411 slat (nsec): min=3239, max=33779, avg=7595.30, stdev=1201.41
00:09:15.411 clat (usec): min=168, max=40927, avg=247.06, stdev=399.15
00:09:15.411 lat (usec): min=175, max=40935, avg=254.66, stdev=399.17
00:09:15.411 clat percentiles (usec):
00:09:15.411 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206],
00:09:15.411 | 30.00th=[ 215], 40.00th=[ 229], 50.00th=[ 241], 60.00th=[ 249],
00:09:15.411 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 326],
00:09:15.411 | 99.00th=[ 429], 99.50th=[ 482], 99.90th=[ 515], 99.95th=[ 529],
00:09:15.411 | 99.99th=[ 685]
00:09:15.411 bw ( KiB/s): min=13888, max=16808, per=40.24%, avg=15668.80, stdev=1099.33, samples=5
00:09:15.411 iops : min= 3472, max= 4202, avg=3917.20, stdev=274.83, samples=5
00:09:15.411 lat (usec) : 250=62.08%, 500=37.67%, 750=0.24%
00:09:15.411 lat (msec) : 50=0.01%
00:09:15.411 cpu : usr=0.74%, sys=4.38%, ctx=10532, majf=0, minf=2
00:09:15.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:15.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:15.411 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:15.411 issued rwts: total=10532,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:15.411 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:15.411
00:09:15.411 Run status group 0 (all jobs):
00:09:15.411 READ: bw=38.0MiB/s (39.9MB/s), 890KiB/s-15.1MiB/s (911kB/s-15.9MB/s), io=126MiB (132MB), run=2716-3323msec
00:09:15.411
00:09:15.411 Disk stats (read/write):
00:09:15.411 nvme0n1: ios=8593/0, merge=0/0, ticks=2870/0, in_queue=2870, util=95.47%
00:09:15.411 nvme0n2: ios=12009/0, merge=0/0, ticks=2819/0,
in_queue=2819, util=95.48% 00:09:15.411 nvme0n3: ios=657/0, merge=0/0, ticks=2834/0, in_queue=2834, util=96.18% 00:09:15.411 nvme0n4: ios=10190/0, merge=0/0, ticks=2447/0, in_queue=2447, util=96.48% 00:09:15.411 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.411 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:15.671 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.671 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:15.930 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.930 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:16.190 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:16.190 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1352311 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:16.449 nvmf hotplug test: fio failed as expected 00:09:16.449 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.708 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- 
# rm -f ./local-job0-0-verify.state 00:09:16.708 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:16.708 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:16.708 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:16.708 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:16.708 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:16.708 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:16.708 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:16.708 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:16.708 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:16.709 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:16.709 rmmod nvme_tcp 00:09:16.709 rmmod nvme_fabrics 00:09:16.709 rmmod nvme_keyring 00:09:16.709 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:16.709 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:16.709 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:16.709 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1349458 ']' 00:09:16.709 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1349458 00:09:16.709 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1349458 ']' 00:09:16.709 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1349458 00:09:16.709 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:16.709 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.709 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1349458 00:09:16.968 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.968 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.968 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1349458' 00:09:16.968 killing process with pid 1349458 00:09:16.968 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1349458 00:09:16.968 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1349458 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.968 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.508 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.508 00:09:19.508 real 0m27.000s 00:09:19.508 user 1m46.729s 00:09:19.508 sys 0m8.847s 00:09:19.508 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.508 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.508 ************************************ 00:09:19.508 END TEST nvmf_fio_target 00:09:19.508 ************************************ 00:09:19.508 14:19:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:19.508 14:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.508 14:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.508 14:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.508 ************************************ 00:09:19.508 START TEST nvmf_bdevio 00:09:19.508 ************************************ 00:09:19.508 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:19.508 * Looking for test storage... 
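[Editor's note] For context on the run that just ended: target/fio.sh deletes the raid, concat, and malloc bdevs while the four read jobs are still in flight, so the "Operation not supported" io_u errors and the err=95 per-job results above are the expected outcome of the hotplug test, not a regression. Condensed from the xtrace, the pass/fail check amounts to the following bash sketch (names as in target/fio.sh; abbreviated, not a verbatim replay of the script):

    # fio was started in the background earlier via scripts/fio-wrapper
    fio_status=0
    wait "$fio_pid" || fio_status=$?   # goes non-zero once the backing bdevs vanish
    if [ "$fio_status" -eq 0 ]; then
        echo "nvmf hotplug test: fio successful as expected"
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi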
00:09:19.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.508 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.508 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.508 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.509 --rc genhtml_branch_coverage=1 00:09:19.509 --rc genhtml_function_coverage=1 00:09:19.509 --rc genhtml_legend=1 00:09:19.509 --rc geninfo_all_blocks=1 00:09:19.509 --rc geninfo_unexecuted_blocks=1 00:09:19.509 00:09:19.509 ' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.509 --rc genhtml_branch_coverage=1 00:09:19.509 --rc genhtml_function_coverage=1 00:09:19.509 --rc genhtml_legend=1 00:09:19.509 --rc geninfo_all_blocks=1 00:09:19.509 --rc geninfo_unexecuted_blocks=1 00:09:19.509 00:09:19.509 ' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.509 --rc genhtml_branch_coverage=1 00:09:19.509 --rc genhtml_function_coverage=1 00:09:19.509 --rc genhtml_legend=1 00:09:19.509 --rc geninfo_all_blocks=1 00:09:19.509 --rc geninfo_unexecuted_blocks=1 00:09:19.509 00:09:19.509 ' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.509 --rc genhtml_branch_coverage=1 00:09:19.509 --rc genhtml_function_coverage=1 00:09:19.509 --rc genhtml_legend=1 00:09:19.509 --rc geninfo_all_blocks=1 00:09:19.509 --rc geninfo_unexecuted_blocks=1 00:09:19.509 00:09:19.509 ' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.509 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.510 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.510 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.510 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.510 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.510 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.510 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:26.087 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:26.087 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:26.087 14:19:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:26.087 Found net devices under 0000:86:00.0: cvl_0_0 00:09:26.087 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:26.088 Found net devices under 0000:86:00.1: cvl_0_1 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.088 
14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:26.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:09:26.088 00:09:26.088 --- 10.0.0.2 ping statistics --- 00:09:26.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.088 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:09:26.088 00:09:26.088 --- 10.0.0.1 ping statistics --- 00:09:26.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.088 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1356800 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1356800 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1356800 ']' 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.088 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.088 [2024-11-17 14:19:14.502529] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
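[Editor's note] The nvmf_tgt process launched above runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit set up, which is how a single host can act as both the kernel TCP initiator (10.0.0.1, default namespace) and the SPDK target (10.0.0.2). Condensed from the setup xtrace earlier in this run, with $SPDK standing in for the jenkins workspace path (a sketch, not a verbatim replay):

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one e810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78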
00:09:26.088 [2024-11-17 14:19:14.502581] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.088 [2024-11-17 14:19:14.587420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.088 [2024-11-17 14:19:14.629324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.088 [2024-11-17 14:19:14.629366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.088 [2024-11-17 14:19:14.629373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.088 [2024-11-17 14:19:14.629379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.088 [2024-11-17 14:19:14.629384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.088 [2024-11-17 14:19:14.631061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:26.088 [2024-11-17 14:19:14.631170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:26.088 [2024-11-17 14:19:14.631187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:26.088 [2024-11-17 14:19:14.631195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.348 [2024-11-17 14:19:15.397990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.348 Malloc0 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.348 14:19:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.348 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.349 [2024-11-17 14:19:15.461892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:26.349 { 00:09:26.349 "params": { 00:09:26.349 "name": "Nvme$subsystem", 00:09:26.349 "trtype": "$TEST_TRANSPORT", 00:09:26.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.349 "adrfam": "ipv4", 00:09:26.349 "trsvcid": "$NVMF_PORT", 00:09:26.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.349 "hdgst": ${hdgst:-false}, 00:09:26.349 "ddgst": ${ddgst:-false} 00:09:26.349 }, 00:09:26.349 "method": "bdev_nvme_attach_controller" 00:09:26.349 } 00:09:26.349 EOF 00:09:26.349 )") 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:26.349 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:26.349 "params": { 00:09:26.349 "name": "Nvme1", 00:09:26.349 "trtype": "tcp", 00:09:26.349 "traddr": "10.0.0.2", 00:09:26.349 "adrfam": "ipv4", 00:09:26.349 "trsvcid": "4420", 00:09:26.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:26.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:26.349 "hdgst": false, 00:09:26.349 "ddgst": false 00:09:26.349 }, 00:09:26.349 "method": "bdev_nvme_attach_controller" 00:09:26.349 }' 00:09:26.349 [2024-11-17 14:19:15.512830] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
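[Editor's note] gen_nvmf_target_json above fills the bdev_nvme_attach_controller template with this run's values (transport tcp, target 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1) and hands the result to bdevio on fd 62 via --json /dev/fd/62, so the bdevio app attaches the controller itself and exposes it as bdev Nvme1n1. Against a long-lived target the equivalent attach could be issued over RPC; a hedged sketch using the standard rpc.py option spelling (not exercised in this log):

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1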
00:09:26.349 [2024-11-17 14:19:15.512874] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357049 ]
00:09:26.609 [2024-11-17 14:19:15.589152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:26.609 [2024-11-17 14:19:15.633234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:26.609 [2024-11-17 14:19:15.633265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:26.609 [2024-11-17 14:19:15.633265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:26.609 I/O targets:
00:09:26.609 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:09:26.609
00:09:26.609
00:09:26.609 CUnit - A unit testing framework for C - Version 2.1-3
00:09:26.609 http://cunit.sourceforge.net/
00:09:26.609
00:09:26.609
00:09:26.609 Suite: bdevio tests on: Nvme1n1
00:09:26.868 Test: blockdev write read block ...passed
00:09:26.868 Test: blockdev write zeroes read block ...passed
00:09:26.868 Test: blockdev write zeroes read no split ...passed
00:09:26.868 Test: blockdev write zeroes read split ...passed
00:09:26.868 Test: blockdev write zeroes read split partial ...passed
00:09:26.868 Test: blockdev reset ...[2024-11-17 14:19:15.948457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:09:26.868 [2024-11-17 14:19:15.948520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a1340 (9): Bad file descriptor
00:09:26.868 [2024-11-17 14:19:16.089879] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:09:26.868 passed 00:09:27.128 Test: blockdev write read 8 blocks ...passed 00:09:27.128 Test: blockdev write read size > 128k ...passed 00:09:27.128 Test: blockdev write read invalid size ...passed 00:09:27.128 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.128 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.128 Test: blockdev write read max offset ...passed 00:09:27.128 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.128 Test: blockdev writev readv 8 blocks ...passed 00:09:27.128 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.128 Test: blockdev writev readv block ...passed 00:09:27.128 Test: blockdev writev readv size > 128k ...passed 00:09:27.128 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.128 Test: blockdev comparev and writev ...[2024-11-17 14:19:16.300133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.128 [2024-11-17 14:19:16.300163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:27.128 [2024-11-17 14:19:16.300177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.128 [2024-11-17 14:19:16.300186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:27.128 [2024-11-17 14:19:16.300428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.128 [2024-11-17 14:19:16.300440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:27.128 [2024-11-17 14:19:16.300451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.128 [2024-11-17 14:19:16.300459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:27.128 [2024-11-17 14:19:16.300695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.128 [2024-11-17 14:19:16.300707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:27.128 [2024-11-17 14:19:16.300719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.128 [2024-11-17 14:19:16.300727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:27.128 [2024-11-17 14:19:16.300968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.128 [2024-11-17 14:19:16.300979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:27.128 [2024-11-17 14:19:16.300990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.128 [2024-11-17 14:19:16.300997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:27.128 passed 00:09:27.388 Test: blockdev nvme passthru rw ...passed 00:09:27.388 Test: blockdev nvme passthru vendor specific ...[2024-11-17 14:19:16.382704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.388 [2024-11-17 14:19:16.382722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:27.388 [2024-11-17 14:19:16.382836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.388 [2024-11-17 14:19:16.382846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:27.388 [2024-11-17 14:19:16.382950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.388 [2024-11-17 14:19:16.382960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:27.388 [2024-11-17 14:19:16.383064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.388 [2024-11-17 14:19:16.383074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:27.388 passed 00:09:27.388 Test: blockdev nvme admin passthru ...passed 00:09:27.388 Test: blockdev copy ...passed 00:09:27.388 00:09:27.388 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.388 suites 1 1 n/a 0 0 00:09:27.388 tests 23 23 23 0 0 00:09:27.388 asserts 152 152 152 0 n/a 00:09:27.388 00:09:27.388 Elapsed time = 1.306 seconds 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.388 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.388 rmmod nvme_tcp 00:09:27.647 rmmod nvme_fabrics 00:09:27.647 rmmod nvme_keyring 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
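The rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above come from the module unload retry loop that the xtrace walks through (nvmf/common.sh @121-@129). A paraphrase reconstructed from those traced line numbers, not the verbatim source; the break-on-success condition is an assumption:

    nvmfcleanup() {
        sync                                       # @121: flush dirty pages before unloading
        if [[ $TEST_TRANSPORT == tcp ]]; then      # @123 ('[' tcp == tcp ']' after expansion)
            set +e                                 # @124: the modules may still be busy
            for i in {1..20}; do                   # @125
                modprobe -v -r nvme-tcp            # @126: -v prints each rmmod seen above
                modprobe -v -r nvme-fabrics && break   # @127: nvme_keyring goes as a dependency
            done
            set -e                                 # @128
            return 0                               # @129: best-effort, never fails the test
        fi
    }

$TEST_TRANSPORT is a stand-in for whatever variable expands to tcp at @123 in this run.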
00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1356800 ']' 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1356800 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1356800 ']' 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1356800 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1356800 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1356800' 00:09:27.648 killing process with pid 1356800 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1356800 00:09:27.648 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1356800 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.908 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.814 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.814 00:09:29.814 real 0m10.716s 00:09:29.814 user 0m12.971s 00:09:29.814 sys 0m5.143s 00:09:29.814 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.814 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.814 ************************************ 00:09:29.814 END TEST nvmf_bdevio 00:09:29.814 ************************************ 00:09:29.814 14:19:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:29.814 00:09:29.814 real 4m35.925s 00:09:29.814 user 10m20.455s 00:09:29.814 sys 1m37.687s 
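killprocess() can be read straight off the trace above (common/autotest_common.sh @954-@978); pid 1356800 is the nvmf target app whose reactor threads started this suite. A reconstruction from those traced lines, with the sudo special case stubbed out since this run does not take that branch:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                         # @954: require a pid argument
        kill -0 "$pid"                                    # @958: fail fast if it already exited
        if [[ $(uname) == Linux ]]; then                  # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: "reactor_3" here
        fi
        if [[ $process_name == sudo ]]; then              # @964: not taken in this run
            : # a sudo wrapper would need its child killed instead (elided)
        fi
        echo "killing process with pid $pid"              # @972
        kill "$pid"                                       # @973: default SIGTERM
        wait "$pid" || true                               # @978: reap; signal exit codes are expected
    }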
00:09:29.815 14:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.815 14:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.815 ************************************ 00:09:29.815 END TEST nvmf_target_core 00:09:29.815 ************************************ 00:09:30.074 14:19:19 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:30.074 14:19:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.074 14:19:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.074 14:19:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.074 ************************************ 00:09:30.074 START TEST nvmf_target_extra 00:09:30.074 ************************************ 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:30.074 * Looking for test storage... 00:09:30.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.074 --rc genhtml_branch_coverage=1 00:09:30.074 --rc genhtml_function_coverage=1 00:09:30.074 --rc genhtml_legend=1 00:09:30.074 --rc geninfo_all_blocks=1 00:09:30.074 --rc geninfo_unexecuted_blocks=1 00:09:30.074 00:09:30.074 ' 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.074 --rc genhtml_branch_coverage=1 00:09:30.074 --rc genhtml_function_coverage=1 00:09:30.074 --rc genhtml_legend=1 00:09:30.074 --rc geninfo_all_blocks=1 00:09:30.074 --rc geninfo_unexecuted_blocks=1 00:09:30.074 00:09:30.074 ' 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.074 --rc genhtml_branch_coverage=1 00:09:30.074 --rc genhtml_function_coverage=1 00:09:30.074 --rc genhtml_legend=1 00:09:30.074 --rc geninfo_all_blocks=1 00:09:30.074 --rc geninfo_unexecuted_blocks=1 00:09:30.074 00:09:30.074 ' 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.074 --rc genhtml_branch_coverage=1 00:09:30.074 --rc genhtml_function_coverage=1 00:09:30.074 --rc genhtml_legend=1 00:09:30.074 --rc geninfo_all_blocks=1 00:09:30.074 --rc geninfo_unexecuted_blocks=1 00:09:30.074 00:09:30.074 ' 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
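The scripts/common.sh block above (and its verbatim repeats each time another test file sources common.sh) is the guard lt 1.15 2 deciding that the installed lcov 1.15 predates lcov 2, which selects the old-style --rc lcov_*_coverage options exported just below. A reconstruction of the comparison from the traced lines @333-@368, simplified to decide at the first differing component; the decimal digit check is omitted and the equality fallthrough is an assumption:

    lt() { cmp_versions "$1" '<' "$2"; }                  # @373: entry point used here

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v             # @333-@338
        IFS=.-: read -ra ver1 <<< "$1"; ver1_l=${#ver1[@]}    # @336/@340: "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"; ver2_l=${#ver2[@]}    # @337/@341: "2"    -> (2)
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do   # @364
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then    # @367
                [[ $op == *">"* ]]; return                # first difference decides
            elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then  # @368
                [[ $op == *"<"* ]]; return
            fi
        done
        [[ $op == *"="* ]]                                # all components equal
    }

With this sketch, cmp_versions 1.15 '<' 2 compares 1 against 2 on the first pass and returns success, matching the LCOV_OPTS assignments seen in the surrounding trace.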
00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.074 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.075 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:30.335 ************************************ 00:09:30.335 START TEST nvmf_example 00:09:30.335 ************************************ 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:30.335 * Looking for test storage... 
00:09:30.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.335 --rc genhtml_branch_coverage=1 00:09:30.335 --rc genhtml_function_coverage=1 00:09:30.335 --rc genhtml_legend=1 00:09:30.335 --rc geninfo_all_blocks=1 00:09:30.335 --rc geninfo_unexecuted_blocks=1 00:09:30.335 00:09:30.335 ' 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.335 --rc genhtml_branch_coverage=1 00:09:30.335 --rc genhtml_function_coverage=1 00:09:30.335 --rc genhtml_legend=1 00:09:30.335 --rc geninfo_all_blocks=1 00:09:30.335 --rc geninfo_unexecuted_blocks=1 00:09:30.335 00:09:30.335 ' 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:30.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.335 --rc genhtml_branch_coverage=1 00:09:30.335 --rc genhtml_function_coverage=1 00:09:30.335 --rc genhtml_legend=1 00:09:30.335 --rc geninfo_all_blocks=1 00:09:30.335 --rc geninfo_unexecuted_blocks=1 00:09:30.335 00:09:30.335 ' 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.335 --rc genhtml_branch_coverage=1 00:09:30.335 --rc genhtml_function_coverage=1 00:09:30.335 --rc genhtml_legend=1 00:09:30.335 --rc geninfo_all_blocks=1 00:09:30.335 --rc geninfo_unexecuted_blocks=1 00:09:30.335 00:09:30.335 ' 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:30.335 14:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:30.335 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:30.336 14:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:30.336 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:36.908 14:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.908 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:36.909 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:36.909 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:36.909 Found net devices under 0000:86:00.0: cvl_0_0 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:36.909 Found net devices under 0000:86:00.1: cvl_0_1 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.909 14:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:36.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:36.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:09:36.909 00:09:36.909 --- 10.0.0.2 ping statistics --- 00:09:36.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.909 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:09:36.909 00:09:36.909 --- 10.0.0.1 ping statistics --- 00:09:36.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.909 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1360869 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1360869 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1360869 ']' 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.909 14:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.909 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:37.478 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:47.461 Initializing NVMe Controllers 00:09:47.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:47.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:47.462 Initialization complete. Launching workers. 00:09:47.462 ======================================================== 00:09:47.462 Latency(us) 00:09:47.462 Device Information : IOPS MiB/s Average min max 00:09:47.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17856.40 69.75 3585.02 703.69 16258.41 00:09:47.462 ======================================================== 00:09:47.462 Total : 17856.40 69.75 3585.02 703.69 16258.41 00:09:47.462 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.721 rmmod nvme_tcp 00:09:47.721 rmmod nvme_fabrics 00:09:47.721 rmmod nvme_keyring 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1360869 ']' 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1360869 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1360869 ']' 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1360869 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1360869 00:09:47.721 14:19:36 
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1360869 ']'
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1360869
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1360869 ']'
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1360869
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1360869
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1360869'
00:09:47.721 killing process with pid 1360869
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1360869
00:09:47.721 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1360869
00:09:47.980 nvmf threads initialize successfully
00:09:47.980 bdev subsystem init successfully
00:09:47.980 created an nvmf target service
00:09:47.980 create targets' poll groups done
00:09:47.980 all subsystems of target started
00:09:47.980 nvmf target is running
00:09:47.980 all subsystems of target stopped
00:09:47.980 destroy targets' poll groups done
00:09:47.980 destroyed the nvmf target service
00:09:47.980 bdev subsystem finish successfully
00:09:47.980 nvmf threads destroy successfully
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:47.980 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:49.887 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:49.887 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:09:49.887 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:49.887 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:50.151
00:09:50.151 real 0m19.800s
00:09:50.151 user 0m45.790s
00:09:50.151 sys 0m6.146s
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:50.151 ************************************
00:09:50.151 END TEST nvmf_example
00:09:50.151 ************************************
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
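Teardown in the trace above mirrors setup: unload the kernel initiator modules, kill the target by pid, and strip the firewall rules the test added before the timing summary is printed. The iptr helper traced at nvmf/common.sh@297 and @791 is a simple filter round-trip, reassembled here from the three traced commands:

    # Reload the current ruleset minus every rule carrying the
    # SPDK_NVMF marker; all other rules survive the round-trip.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }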
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:09:50.151 ************************************
00:09:50.151 START TEST nvmf_filesystem
00:09:50.151 ************************************
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:50.151 * Looking for test storage...
00:09:50.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:50.151 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:50.414 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:09:50.414 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:09:50.414 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:09:50.414 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:50.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:50.415 --rc genhtml_branch_coverage=1
00:09:50.415 --rc genhtml_function_coverage=1
00:09:50.415 --rc genhtml_legend=1
00:09:50.415 --rc geninfo_all_blocks=1
00:09:50.415 --rc geninfo_unexecuted_blocks=1
00:09:50.415
00:09:50.415 '
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:50.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:50.415 --rc genhtml_branch_coverage=1
00:09:50.415 --rc genhtml_function_coverage=1
00:09:50.415 --rc genhtml_legend=1
00:09:50.415 --rc geninfo_all_blocks=1
00:09:50.415 --rc geninfo_unexecuted_blocks=1
00:09:50.415
00:09:50.415 '
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:09:50.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:50.415 --rc genhtml_branch_coverage=1
00:09:50.415 --rc genhtml_function_coverage=1
00:09:50.415 --rc genhtml_legend=1
00:09:50.415 --rc geninfo_all_blocks=1
00:09:50.415 --rc geninfo_unexecuted_blocks=1
00:09:50.415
00:09:50.415 '
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:09:50.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:50.415 --rc genhtml_branch_coverage=1
00:09:50.415 --rc genhtml_function_coverage=1
00:09:50.415 --rc genhtml_legend=1
00:09:50.415 --rc geninfo_all_blocks=1
00:09:50.415 --rc geninfo_unexecuted_blocks=1
00:09:50.415
00:09:50.415 '
00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:09:50.415 14:19:39
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:50.415 
14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:50.415 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:50.416 #define SPDK_CONFIG_H 00:09:50.416 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:50.416 #define SPDK_CONFIG_APPS 1 00:09:50.416 #define SPDK_CONFIG_ARCH native 00:09:50.416 #undef SPDK_CONFIG_ASAN 00:09:50.416 #undef SPDK_CONFIG_AVAHI 00:09:50.416 #undef SPDK_CONFIG_CET 00:09:50.416 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:50.416 #define SPDK_CONFIG_COVERAGE 1 00:09:50.416 #define SPDK_CONFIG_CROSS_PREFIX 00:09:50.416 #undef SPDK_CONFIG_CRYPTO 00:09:50.416 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:50.416 #undef SPDK_CONFIG_CUSTOMOCF 00:09:50.416 #undef SPDK_CONFIG_DAOS 00:09:50.416 #define SPDK_CONFIG_DAOS_DIR 00:09:50.416 #define SPDK_CONFIG_DEBUG 1 00:09:50.416 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:50.416 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:50.416 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:50.416 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:50.416 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:50.416 #undef SPDK_CONFIG_DPDK_UADK 00:09:50.416 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:50.416 #define SPDK_CONFIG_EXAMPLES 1 00:09:50.416 #undef SPDK_CONFIG_FC 00:09:50.416 #define SPDK_CONFIG_FC_PATH 00:09:50.416 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:50.416 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:50.416 #define SPDK_CONFIG_FSDEV 1 00:09:50.416 #undef SPDK_CONFIG_FUSE 00:09:50.416 #undef SPDK_CONFIG_FUZZER 00:09:50.416 #define SPDK_CONFIG_FUZZER_LIB 00:09:50.416 #undef SPDK_CONFIG_GOLANG 00:09:50.416 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:50.416 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:50.416 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:50.416 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:50.416 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:50.416 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:50.416 #undef SPDK_CONFIG_HAVE_LZ4 00:09:50.416 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:50.416 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:50.416 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:50.416 #define SPDK_CONFIG_IDXD 1 00:09:50.416 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:50.416 #undef SPDK_CONFIG_IPSEC_MB 00:09:50.416 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:50.416 #define SPDK_CONFIG_ISAL 1 00:09:50.416 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:50.416 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:50.416 #define SPDK_CONFIG_LIBDIR 00:09:50.416 #undef SPDK_CONFIG_LTO 00:09:50.416 #define SPDK_CONFIG_MAX_LCORES 128 00:09:50.416 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:50.416 #define SPDK_CONFIG_NVME_CUSE 1 00:09:50.416 #undef SPDK_CONFIG_OCF 00:09:50.416 #define SPDK_CONFIG_OCF_PATH 00:09:50.416 #define SPDK_CONFIG_OPENSSL_PATH 00:09:50.416 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:50.416 #define SPDK_CONFIG_PGO_DIR 00:09:50.416 #undef SPDK_CONFIG_PGO_USE 00:09:50.416 #define SPDK_CONFIG_PREFIX /usr/local 00:09:50.416 #undef SPDK_CONFIG_RAID5F 00:09:50.416 #undef SPDK_CONFIG_RBD 00:09:50.416 #define SPDK_CONFIG_RDMA 1 00:09:50.416 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:50.416 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:50.416 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:50.416 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:50.416 #define SPDK_CONFIG_SHARED 1 00:09:50.416 #undef SPDK_CONFIG_SMA 00:09:50.416 #define SPDK_CONFIG_TESTS 1 00:09:50.416 #undef SPDK_CONFIG_TSAN 
00:09:50.416 #define SPDK_CONFIG_UBLK 1 00:09:50.416 #define SPDK_CONFIG_UBSAN 1 00:09:50.416 #undef SPDK_CONFIG_UNIT_TESTS 00:09:50.416 #undef SPDK_CONFIG_URING 00:09:50.416 #define SPDK_CONFIG_URING_PATH 00:09:50.416 #undef SPDK_CONFIG_URING_ZNS 00:09:50.416 #undef SPDK_CONFIG_USDT 00:09:50.416 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:50.416 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:50.416 #define SPDK_CONFIG_VFIO_USER 1 00:09:50.416 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:50.416 #define SPDK_CONFIG_VHOST 1 00:09:50.416 #define SPDK_CONFIG_VIRTIO 1 00:09:50.416 #undef SPDK_CONFIG_VTUNE 00:09:50.416 #define SPDK_CONFIG_VTUNE_DIR 00:09:50.416 #define SPDK_CONFIG_WERROR 1 00:09:50.416 #define SPDK_CONFIG_WPDK_DIR 00:09:50.416 #undef SPDK_CONFIG_XNVME 00:09:50.416 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.416 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:50.417 14:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:50.417 14:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:50.417 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:50.418 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1363275 ]] 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1363275 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
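The set_test_storage call above expands in the trace that follows: the harness probes mounted filesystems with df, walks a list of candidate directories (the suite's $testdir first, then a mktemp fallback), and exports the first one whose filesystem has enough free space as SPDK_TEST_STORAGE. A simplified sketch of that logic, not the verbatim function:

    # Simplified outline of set_test_storage, assuming $testdir is set by the
    # calling suite. The real function also records per-mount usage tables.
    set_test_storage_sketch() {
      local requested_size=$1 target_dir avail fallback
      fallback=$(mktemp -udt spdk.XXXXXX)
      for target_dir in "$testdir" "$fallback/tests/${testdir##*/}" "$fallback"; do
        mkdir -p "$target_dir" 2>/dev/null
        # Available space in 1K blocks on the filesystem backing target_dir.
        avail=$(df -Pk "$target_dir" 2>/dev/null | awk 'NR==2 {print $4}')
        [[ -n $avail ]] || continue
        if (( avail * 1024 >= requested_size )); then
          export SPDK_TEST_STORAGE=$target_dir
          printf '* Found test storage at %s\n' "$target_dir"
          return 0
        fi
      done
      return 1
    }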
00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.iAFBqV 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.iAFBqV/tests/target /tmp/spdk.iAFBqV 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:50.419 14:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189202935808 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963981824 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6761046016 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971957760 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981988864 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.419 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981530112 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981992960 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=462848 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:50.420 14:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:50.420 * Looking for test storage... 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189202935808 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8975638528 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:50.420 14:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.420 --rc genhtml_branch_coverage=1 00:09:50.420 --rc genhtml_function_coverage=1 00:09:50.420 --rc genhtml_legend=1 00:09:50.420 --rc geninfo_all_blocks=1 00:09:50.420 --rc geninfo_unexecuted_blocks=1 00:09:50.420 00:09:50.420 ' 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.420 --rc genhtml_branch_coverage=1 00:09:50.420 --rc genhtml_function_coverage=1 00:09:50.420 --rc genhtml_legend=1 00:09:50.420 --rc geninfo_all_blocks=1 00:09:50.420 --rc geninfo_unexecuted_blocks=1 00:09:50.420 00:09:50.420 ' 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.420 --rc genhtml_branch_coverage=1 00:09:50.420 --rc genhtml_function_coverage=1 00:09:50.420 --rc genhtml_legend=1 00:09:50.420 --rc geninfo_all_blocks=1 00:09:50.420 --rc geninfo_unexecuted_blocks=1 00:09:50.420 00:09:50.420 ' 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.420 --rc genhtml_branch_coverage=1 00:09:50.420 --rc genhtml_function_coverage=1 00:09:50.420 --rc genhtml_legend=1 00:09:50.420 --rc geninfo_all_blocks=1 00:09:50.420 --rc geninfo_unexecuted_blocks=1 00:09:50.420 00:09:50.420 ' 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:50.420 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.421 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.421 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.421 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.421 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.421 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.421 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.421 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.421 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.680 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.681 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:57.253 
14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.253 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:57.254 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:57.254 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:57.254 Found net devices under 0000:86:00.0: cvl_0_0 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:57.254 Found net devices under 
0000:86:00.1: cvl_0_1 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:09:57.254 00:09:57.254 --- 10.0.0.2 ping statistics --- 00:09:57.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.254 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:09:57.254 00:09:57.254 --- 10.0.0.1 ping statistics --- 00:09:57.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.254 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.254 ************************************ 00:09:57.254 START TEST nvmf_filesystem_no_in_capsule 00:09:57.254 ************************************ 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
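The nvmf_tcp_init sequence above builds the physical-NIC test topology: the first E810 port (cvl_0_0) becomes the target and is moved into its own network namespace, while the second port (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace, minus the harness wrappers:

    # Target port in a dedicated namespace; initiator port in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check reachability in both directions.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1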
00:09:57.254 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1366528 00:09:57.255 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.255 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1366528 00:09:57.255 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1366528 ']' 00:09:57.255 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.255 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.255 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.255 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.255 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.255 [2024-11-17 14:19:45.784348] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:09:57.255 [2024-11-17 14:19:45.784396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.255 [2024-11-17 14:19:45.862348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.255 [2024-11-17 14:19:45.904738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.255 [2024-11-17 14:19:45.904777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.255 [2024-11-17 14:19:45.904784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.255 [2024-11-17 14:19:45.904791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.255 [2024-11-17 14:19:45.904796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
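nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A simplified stand-in for that startup-and-poll step, using SPDK's scripts/rpc.py (path relative to the spdk checkout); the real waitforlisten has more retries and error handling:

    # Start the target in the namespace with the flags seen in the trace.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll until the app answers on its default RPC socket.
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s "$rpc_sock" spdk_get_version &>/dev/null && break
        sleep 0.1
    done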
00:09:57.255 [2024-11-17 14:19:45.906419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.255 [2024-11-17 14:19:45.906455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.255 [2024-11-17 14:19:45.906565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.255 [2024-11-17 14:19:45.906566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.255 [2024-11-17 14:19:46.050956] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.255 Malloc1 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.255 14:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.255 [2024-11-17 14:19:46.189885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:57.255 { 00:09:57.255 "name": "Malloc1", 00:09:57.255 "aliases": [ 00:09:57.255 "d3e460d9-abad-44a3-9f3a-6aaf685b816e" 00:09:57.255 ], 00:09:57.255 "product_name": "Malloc disk", 00:09:57.255 "block_size": 512, 00:09:57.255 "num_blocks": 1048576, 00:09:57.255 "uuid": "d3e460d9-abad-44a3-9f3a-6aaf685b816e", 00:09:57.255 "assigned_rate_limits": { 00:09:57.255 "rw_ios_per_sec": 0, 00:09:57.255 "rw_mbytes_per_sec": 0, 00:09:57.255 "r_mbytes_per_sec": 0, 00:09:57.255 "w_mbytes_per_sec": 0 00:09:57.255 }, 00:09:57.255 "claimed": true, 00:09:57.255 "claim_type": "exclusive_write", 00:09:57.255 "zoned": false, 00:09:57.255 "supported_io_types": { 00:09:57.255 "read": 
true, 00:09:57.255 "write": true, 00:09:57.255 "unmap": true, 00:09:57.255 "flush": true, 00:09:57.255 "reset": true, 00:09:57.255 "nvme_admin": false, 00:09:57.255 "nvme_io": false, 00:09:57.255 "nvme_io_md": false, 00:09:57.255 "write_zeroes": true, 00:09:57.255 "zcopy": true, 00:09:57.255 "get_zone_info": false, 00:09:57.255 "zone_management": false, 00:09:57.255 "zone_append": false, 00:09:57.255 "compare": false, 00:09:57.255 "compare_and_write": false, 00:09:57.255 "abort": true, 00:09:57.255 "seek_hole": false, 00:09:57.255 "seek_data": false, 00:09:57.255 "copy": true, 00:09:57.255 "nvme_iov_md": false 00:09:57.255 }, 00:09:57.255 "memory_domains": [ 00:09:57.255 { 00:09:57.255 "dma_device_id": "system", 00:09:57.255 "dma_device_type": 1 00:09:57.255 }, 00:09:57.255 { 00:09:57.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.255 "dma_device_type": 2 00:09:57.255 } 00:09:57.255 ], 00:09:57.255 "driver_specific": {} 00:09:57.255 } 00:09:57.255 ]' 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:57.255 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:58.193 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:58.193 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:58.193 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.193 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:58.193 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:00.736 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:01.673 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:01.673 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:01.673 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.673 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.673 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.933 ************************************ 00:10:01.933 START TEST filesystem_ext4 00:10:01.933 ************************************ 00:10:01.933 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
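The trace above covers the whole attach path. rpc_cmd in this suite is a thin wrapper around scripts/rpc.py, so the target-side setup plus the initiator-side connect amount to roughly the following standalone sequence (assuming the default /var/tmp/spdk.sock RPC socket; hostnqn and hostid stand for the host's UUID-based values visible in the nvme connect line):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    i=0
    while (( i++ <= 15 )); do                  # waitforserial, paraphrased
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
    done

    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1

The sec_size_to_bytes check in between confirms the kernel sees the same 536870912 bytes as the 512 MiB malloc bdev before the partition is laid down.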
00:10:01.933 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:01.933 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:01.933 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:01.933 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:01.933 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:01.933 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:01.933 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:01.933 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:01.933 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:01.933 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:01.933 mke2fs 1.47.0 (5-Feb-2023) 00:10:01.933 Discarding device blocks: 0/522240 done 00:10:01.933 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:01.933 Filesystem UUID: f3391e77-f2eb-4563-b7af-0a8cae1bc2df 00:10:01.933 Superblock backups stored on blocks: 00:10:01.933 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:01.933 00:10:01.933 Allocating group tables: 0/64 done 00:10:01.933 Writing inode tables: 0/64 done 00:10:02.192 Creating journal (8192 blocks): done 00:10:02.192 Writing superblocks and filesystem accounting information: 0/64 done 00:10:02.192 00:10:02.192 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:02.192 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:07.469 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:07.469 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:07.469 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:07.469 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:07.470 
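The per-filesystem test body just traced (target/filesystem.sh lines 18 through 43, condensed) formats the partition, does a small create/delete round trip through the page cache, and then verifies that both the target process and the block devices survived; the i=0 at line 29 brackets retry handling elided from this sketch:

    fstype=$1 nvme_name=$2
    make_filesystem "$fstype" "/dev/${nvme_name}p1"
    mount "/dev/${nvme_name}p1" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # target must still be running
    lsblk -l -o NAME | grep -q -w "$nvme_name"
    lsblk -l -o NAME | grep -q -w "${nvme_name}p1"

The same body runs unchanged for ext4, btrfs and xfs below; only the fstype argument differs.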
14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1366528 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:07.470 00:10:07.470 real 0m5.694s 00:10:07.470 user 0m0.017s 00:10:07.470 sys 0m0.081s 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:07.470 ************************************ 00:10:07.470 END TEST filesystem_ext4 00:10:07.470 ************************************ 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.470 ************************************ 00:10:07.470 START TEST filesystem_btrfs 00:10:07.470 ************************************ 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:07.470 14:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:07.470 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:07.729 btrfs-progs v6.8.1 00:10:07.729 See https://btrfs.readthedocs.io for more information. 00:10:07.729 00:10:07.729 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:07.729 NOTE: several default settings have changed in version 5.15, please make sure 00:10:07.729 this does not affect your deployments: 00:10:07.729 - DUP for metadata (-m dup) 00:10:07.729 - enabled no-holes (-O no-holes) 00:10:07.729 - enabled free-space-tree (-R free-space-tree) 00:10:07.729 00:10:07.729 Label: (null) 00:10:07.729 UUID: 8f4b4928-84f9-4199-8fde-7110883d95b7 00:10:07.729 Node size: 16384 00:10:07.729 Sector size: 4096 (CPU page size: 4096) 00:10:07.729 Filesystem size: 510.00MiB 00:10:07.729 Block group profiles: 00:10:07.729 Data: single 8.00MiB 00:10:07.729 Metadata: DUP 32.00MiB 00:10:07.729 System: DUP 8.00MiB 00:10:07.729 SSD detected: yes 00:10:07.729 Zoned device: no 00:10:07.729 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:07.729 Checksum: crc32c 00:10:07.729 Number of devices: 1 00:10:07.729 Devices: 00:10:07.729 ID SIZE PATH 00:10:07.729 1 510.00MiB /dev/nvme0n1p1 00:10:07.729 00:10:07.729 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:07.729 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:08.666 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:08.666 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:08.666 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:08.666 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:08.666 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:08.666 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:08.666 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1366528 00:10:08.666 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:08.666 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:08.667 
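make_filesystem, whose branching was traced at the top of this btrfs case (autotest_common.sh lines 930 through 949), mostly just picks the right force flag, since mke2fs spells it differently from mkfs.btrfs and mkfs.xfs; a trimmed sketch, leaving out the retry handling implied by the local i=0 / return 0 bracketing:

    make_filesystem() {
        local fstype=$1 dev_name=$2 i=0 force
        if [ "$fstype" = ext4 ]; then
            force=-F                           # mke2fs uses -F
        else
            force=-f                           # mkfs.btrfs / mkfs.xfs use -f
        fi
        mkfs.$fstype $force "$dev_name"
    }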
14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:08.667 00:10:08.667 real 0m1.029s 00:10:08.667 user 0m0.022s 00:10:08.667 sys 0m0.116s 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:08.667 ************************************ 00:10:08.667 END TEST filesystem_btrfs 00:10:08.667 ************************************ 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.667 ************************************ 00:10:08.667 START TEST filesystem_xfs 00:10:08.667 ************************************ 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:08.667 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:08.667 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:08.667 = sectsz=512 attr=2, projid32bit=1 00:10:08.667 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:08.667 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:08.667 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:08.667 = sunit=0 swidth=0 blks 00:10:08.667 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:08.667 log =internal log bsize=4096 blocks=16384, version=2 00:10:08.667 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:08.667 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:10.045 Discarding blocks...Done. 00:10:10.045 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:10.045 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1366528 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:11.950 00:10:11.950 real 0m3.396s 00:10:11.950 user 0m0.024s 00:10:11.950 sys 0m0.076s 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.950 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:11.950 ************************************ 00:10:11.950 END TEST filesystem_xfs 00:10:11.950 ************************************ 00:10:12.209 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.468 14:20:01 
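Teardown for this pass mirrors the setup: drop the partition under flock on the device node (presumably to keep other users of the node out while the table changes), flush, disconnect the initiator, then delete the subsystem over RPC:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1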
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1366528 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1366528 ']' 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1366528 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1366528 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1366528' 00:10:12.468 killing process with pid 1366528 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1366528 00:10:12.468 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
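killprocess, reconstructed from the checks traced above; the sudo branch is an assumption and is not exercised on this run, where the process name resolves to reactor_0:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0             # already gone, nothing to do
        if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            kill -9 "$(pgrep -P "$pid")"       # assumed: reap the child under sudo
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                            # the wait traced just below
    }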
common/autotest_common.sh@978 -- # wait 1366528 00:10:13.035 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:13.035 00:10:13.035 real 0m16.249s 00:10:13.035 user 1m3.886s 00:10:13.035 sys 0m1.400s 00:10:13.035 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.035 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.035 ************************************ 00:10:13.035 END TEST nvmf_filesystem_no_in_capsule 00:10:13.035 ************************************ 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.035 ************************************ 00:10:13.035 START TEST nvmf_filesystem_in_capsule 00:10:13.035 ************************************ 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1369304 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1369304 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1369304 ']' 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
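Everything from here reruns the same suite with in-capsule data enabled: nvmf_filesystem_part takes the capsule size as its argument, and the '[' 4096 -eq 0 ']' check at filesystem.sh line 76 only switches the test-name prefix to filesystem_in_capsule_*. At the top level the two passes are driven as follows, the first invocation inferred from the no_in_capsule trace above:

    run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
    run_test nvmf_filesystem_in_capsule    nvmf_filesystem_part 4096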
00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.035 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.035 [2024-11-17 14:20:02.115498] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:10:13.035 [2024-11-17 14:20:02.115543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.035 [2024-11-17 14:20:02.196623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.035 [2024-11-17 14:20:02.235008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.035 [2024-11-17 14:20:02.235046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.036 [2024-11-17 14:20:02.235054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.036 [2024-11-17 14:20:02.235060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.036 [2024-11-17 14:20:02.235065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.036 [2024-11-17 14:20:02.236696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.036 [2024-11-17 14:20:02.236804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.036 [2024-11-17 14:20:02.236910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.036 [2024-11-17 14:20:02.236911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.295 [2024-11-17 14:20:02.382752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.295 14:20:02 
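The second-pass target comes up the same way: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and waitforlisten blocks until the RPC socket answers (sketched here by polling rpc_get_methods, an assumption about its mechanism). The one functional difference is the transport's -c flag:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096

With -c 4096 the target accepts up to 4 KiB of data inside the NVMe/TCP command capsule itself, so small writes skip the separate data-transfer round trip; that behavior is what this second pass exercises.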
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.295 Malloc1 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.295 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.555 [2024-11-17 14:20:02.531009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:13.555 14:20:02 
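get_bdev_size, traced next, derives the malloc size from the bdev's JSON description; paraphrased, with the MiB arithmetic inferred from bs=512, nb=1048576 and the echoed 512:

    bdev_info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")        # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")        # 1048576
    bdev_size=$(( bs * nb / 1024 / 1024 ))             # 512 MiB
    malloc_size=$(( bdev_size * 1024 * 1024 ))         # 536870912 bytes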
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:13.555 { 00:10:13.555 "name": "Malloc1", 00:10:13.555 "aliases": [ 00:10:13.555 "d7c65b73-69e4-4fff-aa38-397624c2cc60" 00:10:13.555 ], 00:10:13.555 "product_name": "Malloc disk", 00:10:13.555 "block_size": 512, 00:10:13.555 "num_blocks": 1048576, 00:10:13.555 "uuid": "d7c65b73-69e4-4fff-aa38-397624c2cc60", 00:10:13.555 "assigned_rate_limits": { 00:10:13.555 "rw_ios_per_sec": 0, 00:10:13.555 "rw_mbytes_per_sec": 0, 00:10:13.555 "r_mbytes_per_sec": 0, 00:10:13.555 "w_mbytes_per_sec": 0 00:10:13.555 }, 00:10:13.555 "claimed": true, 00:10:13.555 "claim_type": "exclusive_write", 00:10:13.555 "zoned": false, 00:10:13.555 "supported_io_types": { 00:10:13.555 "read": true, 00:10:13.555 "write": true, 00:10:13.555 "unmap": true, 00:10:13.555 "flush": true, 00:10:13.555 "reset": true, 00:10:13.555 "nvme_admin": false, 00:10:13.555 "nvme_io": false, 00:10:13.555 "nvme_io_md": false, 00:10:13.555 "write_zeroes": true, 00:10:13.555 "zcopy": true, 00:10:13.555 "get_zone_info": false, 00:10:13.555 "zone_management": false, 00:10:13.555 "zone_append": false, 00:10:13.555 "compare": false, 00:10:13.555 "compare_and_write": false, 00:10:13.555 "abort": true, 00:10:13.555 "seek_hole": false, 00:10:13.555 "seek_data": false, 00:10:13.555 "copy": true, 00:10:13.555 "nvme_iov_md": false 00:10:13.555 }, 00:10:13.555 "memory_domains": [ 00:10:13.555 { 00:10:13.555 "dma_device_id": "system", 00:10:13.555 "dma_device_type": 1 00:10:13.555 }, 00:10:13.555 { 00:10:13.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.555 "dma_device_type": 2 00:10:13.555 } 00:10:13.555 ], 00:10:13.555 "driver_specific": {} 00:10:13.555 } 00:10:13.555 ]' 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:13.555 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.935 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.935 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:14.935 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.935 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:14.935 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:16.840 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:17.099 14:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:17.358 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:18.293 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:18.293 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:18.293 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:18.293 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.293 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.293 ************************************ 00:10:18.293 START TEST filesystem_in_capsule_ext4 00:10:18.293 ************************************ 00:10:18.293 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:18.293 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:18.293 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:18.293 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:18.293 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:18.293 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:18.294 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:18.294 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:18.294 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:18.294 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:18.294 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:18.294 mke2fs 1.47.0 (5-Feb-2023) 00:10:18.294 Discarding device blocks: 0/522240 done 00:10:18.294 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:18.294 Filesystem UUID: 5180d948-719c-4f5c-b04d-6d1293dd13ef 00:10:18.294 Superblock backups stored on blocks: 00:10:18.294 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:18.294 00:10:18.294 Allocating group tables: 0/64 done 00:10:18.294 Writing inode tables: 
0/64 done 00:10:18.552 Creating journal (8192 blocks): done 00:10:20.868 Writing superblocks and filesystem accounting information: 0/64 done 00:10:20.868 00:10:20.868 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:20.868 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1369304 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.441 00:10:27.441 real 0m8.329s 00:10:27.441 user 0m0.036s 00:10:27.441 sys 0m0.065s 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:27.441 ************************************ 00:10:27.441 END TEST filesystem_in_capsule_ext4 00:10:27.441 ************************************ 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.441 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.441 
************************************ 00:10:27.441 START TEST filesystem_in_capsule_btrfs 00:10:27.441 ************************************ 00:10:27.442 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:27.442 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:27.442 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.442 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:27.442 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:27.442 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:27.442 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:27.442 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:27.442 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:27.442 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:27.442 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:27.442 btrfs-progs v6.8.1 00:10:27.442 See https://btrfs.readthedocs.io for more information. 00:10:27.442 00:10:27.442 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:27.442 NOTE: several default settings have changed in version 5.15, please make sure 00:10:27.442 this does not affect your deployments: 00:10:27.442 - DUP for metadata (-m dup) 00:10:27.442 - enabled no-holes (-O no-holes) 00:10:27.442 - enabled free-space-tree (-R free-space-tree) 00:10:27.442 00:10:27.442 Label: (null) 00:10:27.442 UUID: 20620c0d-cc69-4067-af76-e088db2f6144 00:10:27.442 Node size: 16384 00:10:27.442 Sector size: 4096 (CPU page size: 4096) 00:10:27.442 Filesystem size: 510.00MiB 00:10:27.442 Block group profiles: 00:10:27.442 Data: single 8.00MiB 00:10:27.442 Metadata: DUP 32.00MiB 00:10:27.442 System: DUP 8.00MiB 00:10:27.442 SSD detected: yes 00:10:27.442 Zoned device: no 00:10:27.442 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:27.442 Checksum: crc32c 00:10:27.442 Number of devices: 1 00:10:27.442 Devices: 00:10:27.442 ID SIZE PATH 00:10:27.442 1 510.00MiB /dev/nvme0n1p1 00:10:27.442 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1369304 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.442 00:10:27.442 real 0m0.582s 00:10:27.442 user 0m0.030s 00:10:27.442 sys 0m0.108s 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:27.442 ************************************ 00:10:27.442 END TEST filesystem_in_capsule_btrfs 00:10:27.442 ************************************ 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.442 ************************************ 00:10:27.442 START TEST filesystem_in_capsule_xfs 00:10:27.442 ************************************ 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:27.442 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:27.442 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:27.442 = sectsz=512 attr=2, projid32bit=1 00:10:27.442 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:27.442 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:27.442 data = bsize=4096 blocks=130560, imaxpct=25 00:10:27.442 = sunit=0 swidth=0 blks 00:10:27.442 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:27.442 log =internal log bsize=4096 blocks=16384, version=2 00:10:27.442 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:27.442 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:28.379 Discarding blocks...Done. 
00:10:28.379 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:28.379 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1369304 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:30.918 00:10:30.918 real 0m3.423s 00:10:30.918 user 0m0.025s 00:10:30.918 sys 0m0.072s 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:30.918 ************************************ 00:10:30.918 END TEST filesystem_in_capsule_xfs 00:10:30.918 ************************************ 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:30.918 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:30.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1369304 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1369304 ']' 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1369304 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1369304 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1369304' 00:10:30.918 killing process with pid 1369304 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1369304 00:10:30.918 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1369304 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:31.519 00:10:31.519 real 0m18.380s 00:10:31.519 user 1m12.337s 00:10:31.519 sys 0m1.429s 00:10:31.519 14:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.519 ************************************ 00:10:31.519 END TEST nvmf_filesystem_in_capsule 00:10:31.519 ************************************ 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.519 rmmod nvme_tcp 00:10:31.519 rmmod nvme_fabrics 00:10:31.519 rmmod nvme_keyring 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:31.519 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.520 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.520 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.520 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.520 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.620 00:10:33.620 real 0m43.397s 00:10:33.620 user 2m18.266s 00:10:33.620 sys 0m7.600s 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.620 
************************************ 00:10:33.620 END TEST nvmf_filesystem 00:10:33.620 ************************************ 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:33.620 ************************************ 00:10:33.620 START TEST nvmf_target_discovery 00:10:33.620 ************************************ 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:33.620 * Looking for test storage... 00:10:33.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.620 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.880 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.880 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.880 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.880 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.880 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.880 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.880 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:33.880 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:33.880 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.880 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:33.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.881 --rc genhtml_branch_coverage=1 00:10:33.881 --rc genhtml_function_coverage=1 00:10:33.881 --rc genhtml_legend=1 00:10:33.881 --rc geninfo_all_blocks=1 00:10:33.881 --rc geninfo_unexecuted_blocks=1 00:10:33.881 00:10:33.881 ' 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:33.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.881 --rc genhtml_branch_coverage=1 00:10:33.881 --rc genhtml_function_coverage=1 00:10:33.881 --rc genhtml_legend=1 00:10:33.881 --rc geninfo_all_blocks=1 00:10:33.881 --rc geninfo_unexecuted_blocks=1 00:10:33.881 00:10:33.881 ' 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:33.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.881 --rc genhtml_branch_coverage=1 00:10:33.881 --rc genhtml_function_coverage=1 00:10:33.881 --rc genhtml_legend=1 00:10:33.881 --rc geninfo_all_blocks=1 00:10:33.881 --rc geninfo_unexecuted_blocks=1 00:10:33.881 00:10:33.881 ' 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:33.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.881 --rc genhtml_branch_coverage=1 00:10:33.881 --rc genhtml_function_coverage=1 00:10:33.881 --rc genhtml_legend=1 00:10:33.881 --rc geninfo_all_blocks=1 00:10:33.881 --rc geninfo_unexecuted_blocks=1 00:10:33.881 00:10:33.881 ' 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.881 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.882 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.454 14:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:40.454 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.454 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:40.454 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:40.455 Found net devices under 0000:86:00.0: cvl_0_0 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:40.455 Found net devices under 0000:86:00.1: cvl_0_1 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.455 14:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:10:40.455 00:10:40.455 --- 10.0.0.2 ping statistics --- 00:10:40.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.455 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:10:40.455 00:10:40.455 --- 10.0.0.1 ping statistics --- 00:10:40.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.455 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1376050 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1376050 00:10:40.455 14:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1376050 ']' 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.455 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.455 [2024-11-17 14:20:28.956423] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:10:40.455 [2024-11-17 14:20:28.956479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.455 [2024-11-17 14:20:29.037482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.455 [2024-11-17 14:20:29.080431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.455 [2024-11-17 14:20:29.080468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.455 [2024-11-17 14:20:29.080475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.455 [2024-11-17 14:20:29.080482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.455 [2024-11-17 14:20:29.080487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
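With the namespace plumbing in place (cvl_0_0 moved into cvl_0_0_ns_spdk, 10.0.0.1/10.0.0.2 assigned, the iptables ACCEPT rule installed, and both ping checks passing), nvmfappstart launches the target inside that namespace, as traced at nvmf/common.sh@508, and blocks until its RPC socket answers before any rpc_cmd calls are issued. A condensed sketch of that startup; the polling loop is an assumed stand-in for the suite's waitforlisten helper, and rpc.py's default /var/tmp/spdk.sock socket is assumed:

  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
  "${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target answers (stand-in for waitforlisten).
  while ! ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1   # give up if the target already died
      sleep 0.5
  done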
00:10:40.455 [2024-11-17 14:20:29.081949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.455 [2024-11-17 14:20:29.082059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.455 [2024-11-17 14:20:29.082169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.455 [2024-11-17 14:20:29.082170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.455 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.455 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 [2024-11-17 14:20:29.223810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 Null1 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 [2024-11-17 14:20:29.269251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 Null2 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:40.456 Null3 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 Null4 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.456 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:40.456 00:10:40.456 Discovery Log Number of Records 6, Generation counter 6 00:10:40.456 =====Discovery Log Entry 0====== 00:10:40.456 trtype: tcp 00:10:40.456 adrfam: ipv4 00:10:40.456 subtype: current discovery subsystem 00:10:40.456 treq: not required 00:10:40.456 portid: 0 00:10:40.456 trsvcid: 4420 00:10:40.456 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:40.456 traddr: 10.0.0.2 00:10:40.456 eflags: explicit discovery connections, duplicate discovery information 00:10:40.456 sectype: none 00:10:40.456 =====Discovery Log Entry 1====== 00:10:40.456 trtype: tcp 00:10:40.456 adrfam: ipv4 00:10:40.456 subtype: nvme subsystem 00:10:40.456 treq: not required 00:10:40.456 portid: 0 00:10:40.456 trsvcid: 4420 00:10:40.456 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:40.456 traddr: 10.0.0.2 00:10:40.456 eflags: none 00:10:40.456 sectype: none 00:10:40.456 =====Discovery Log Entry 2====== 00:10:40.456 trtype: tcp 00:10:40.456 adrfam: ipv4 00:10:40.456 subtype: nvme subsystem 00:10:40.456 treq: not required 00:10:40.457 portid: 0 00:10:40.457 trsvcid: 4420 00:10:40.457 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:40.457 traddr: 10.0.0.2 00:10:40.457 eflags: none 00:10:40.457 sectype: none 00:10:40.457 =====Discovery Log Entry 3====== 00:10:40.457 trtype: tcp 00:10:40.457 adrfam: ipv4 00:10:40.457 subtype: nvme subsystem 00:10:40.457 treq: not required 00:10:40.457 portid: 0 00:10:40.457 trsvcid: 4420 00:10:40.457 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:40.457 traddr: 10.0.0.2 00:10:40.457 eflags: none 00:10:40.457 sectype: none 00:10:40.457 =====Discovery Log Entry 4====== 00:10:40.457 trtype: tcp 00:10:40.457 adrfam: ipv4 00:10:40.457 subtype: nvme subsystem 
00:10:40.457 treq: not required 00:10:40.457 portid: 0 00:10:40.457 trsvcid: 4420 00:10:40.457 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:40.457 traddr: 10.0.0.2 00:10:40.457 eflags: none 00:10:40.457 sectype: none 00:10:40.457 =====Discovery Log Entry 5====== 00:10:40.457 trtype: tcp 00:10:40.457 adrfam: ipv4 00:10:40.457 subtype: discovery subsystem referral 00:10:40.457 treq: not required 00:10:40.457 portid: 0 00:10:40.457 trsvcid: 4430 00:10:40.457 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:40.457 traddr: 10.0.0.2 00:10:40.457 eflags: none 00:10:40.457 sectype: none 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:40.457 Perform nvmf subsystem discovery via RPC 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.457 [ 00:10:40.457 { 00:10:40.457 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:40.457 "subtype": "Discovery", 00:10:40.457 "listen_addresses": [ 00:10:40.457 { 00:10:40.457 "trtype": "TCP", 00:10:40.457 "adrfam": "IPv4", 00:10:40.457 "traddr": "10.0.0.2", 00:10:40.457 "trsvcid": "4420" 00:10:40.457 } 00:10:40.457 ], 00:10:40.457 "allow_any_host": true, 00:10:40.457 "hosts": [] 00:10:40.457 }, 00:10:40.457 { 00:10:40.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:40.457 "subtype": "NVMe", 00:10:40.457 "listen_addresses": [ 00:10:40.457 { 00:10:40.457 "trtype": "TCP", 00:10:40.457 "adrfam": "IPv4", 00:10:40.457 "traddr": "10.0.0.2", 00:10:40.457 "trsvcid": "4420" 00:10:40.457 } 00:10:40.457 ], 00:10:40.457 "allow_any_host": true, 00:10:40.457 "hosts": [], 00:10:40.457 "serial_number": "SPDK00000000000001", 00:10:40.457 "model_number": "SPDK bdev Controller", 00:10:40.457 "max_namespaces": 32, 00:10:40.457 "min_cntlid": 1, 00:10:40.457 "max_cntlid": 65519, 00:10:40.457 "namespaces": [ 00:10:40.457 { 00:10:40.457 "nsid": 1, 00:10:40.457 "bdev_name": "Null1", 00:10:40.457 "name": "Null1", 00:10:40.457 "nguid": "0CE1AAB71D994BB0BCABFD6DAB800CCD", 00:10:40.457 "uuid": "0ce1aab7-1d99-4bb0-bcab-fd6dab800ccd" 00:10:40.457 } 00:10:40.457 ] 00:10:40.457 }, 00:10:40.457 { 00:10:40.457 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:40.457 "subtype": "NVMe", 00:10:40.457 "listen_addresses": [ 00:10:40.457 { 00:10:40.457 "trtype": "TCP", 00:10:40.457 "adrfam": "IPv4", 00:10:40.457 "traddr": "10.0.0.2", 00:10:40.457 "trsvcid": "4420" 00:10:40.457 } 00:10:40.457 ], 00:10:40.457 "allow_any_host": true, 00:10:40.457 "hosts": [], 00:10:40.457 "serial_number": "SPDK00000000000002", 00:10:40.457 "model_number": "SPDK bdev Controller", 00:10:40.457 "max_namespaces": 32, 00:10:40.457 "min_cntlid": 1, 00:10:40.457 "max_cntlid": 65519, 00:10:40.457 "namespaces": [ 00:10:40.457 { 00:10:40.457 "nsid": 1, 00:10:40.457 "bdev_name": "Null2", 00:10:40.457 "name": "Null2", 00:10:40.457 "nguid": "B76F10AE4518450A97CBB6CA67291D3A", 00:10:40.457 "uuid": "b76f10ae-4518-450a-97cb-b6ca67291d3a" 00:10:40.457 } 00:10:40.457 ] 00:10:40.457 }, 00:10:40.457 { 00:10:40.457 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:40.457 "subtype": "NVMe", 00:10:40.457 "listen_addresses": [ 00:10:40.457 { 00:10:40.457 "trtype": "TCP", 00:10:40.457 "adrfam": "IPv4", 00:10:40.457 "traddr": "10.0.0.2", 
00:10:40.457 "trsvcid": "4420" 00:10:40.457 } 00:10:40.457 ], 00:10:40.457 "allow_any_host": true, 00:10:40.457 "hosts": [], 00:10:40.457 "serial_number": "SPDK00000000000003", 00:10:40.457 "model_number": "SPDK bdev Controller", 00:10:40.457 "max_namespaces": 32, 00:10:40.457 "min_cntlid": 1, 00:10:40.457 "max_cntlid": 65519, 00:10:40.457 "namespaces": [ 00:10:40.457 { 00:10:40.457 "nsid": 1, 00:10:40.457 "bdev_name": "Null3", 00:10:40.457 "name": "Null3", 00:10:40.457 "nguid": "93C01C3F6AA844D2B6729882114E7C0D", 00:10:40.457 "uuid": "93c01c3f-6aa8-44d2-b672-9882114e7c0d" 00:10:40.457 } 00:10:40.457 ] 00:10:40.457 }, 00:10:40.457 { 00:10:40.457 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:40.457 "subtype": "NVMe", 00:10:40.457 "listen_addresses": [ 00:10:40.457 { 00:10:40.457 "trtype": "TCP", 00:10:40.457 "adrfam": "IPv4", 00:10:40.457 "traddr": "10.0.0.2", 00:10:40.457 "trsvcid": "4420" 00:10:40.457 } 00:10:40.457 ], 00:10:40.457 "allow_any_host": true, 00:10:40.457 "hosts": [], 00:10:40.457 "serial_number": "SPDK00000000000004", 00:10:40.457 "model_number": "SPDK bdev Controller", 00:10:40.457 "max_namespaces": 32, 00:10:40.457 "min_cntlid": 1, 00:10:40.457 "max_cntlid": 65519, 00:10:40.457 "namespaces": [ 00:10:40.457 { 00:10:40.457 "nsid": 1, 00:10:40.457 "bdev_name": "Null4", 00:10:40.457 "name": "Null4", 00:10:40.457 "nguid": "F258947B4A4244849114932BDF9E5111", 00:10:40.457 "uuid": "f258947b-4a42-4484-9114-932bdf9e5111" 00:10:40.457 } 00:10:40.457 ] 00:10:40.457 } 00:10:40.457 ] 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.457 14:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.457 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:40.716 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:40.717 14:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.717 rmmod nvme_tcp 00:10:40.717 rmmod nvme_fabrics 00:10:40.717 rmmod nvme_keyring 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1376050 ']' 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1376050 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1376050 ']' 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1376050 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1376050 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1376050' 00:10:40.717 killing process with pid 1376050 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1376050 00:10:40.717 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1376050 00:10:40.976 14:20:30 
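The trace above has now exercised the full discovery-test cycle: four null bdevs wrapped in subsystems with TCP listeners, a discovery listener plus a referral, an nvme discover pass showing six records, an RPC-side nvmf_get_subsystems cross-check, and a symmetric teardown. As a standalone recap, here is a minimal sketch of that RPC sequence, assuming a running nvmf_tgt whose TCP transport already exists and whose RPC socket is at the default /var/tmp/spdk.sock; the rpc.py path below is an assumption, while the RPC names, sizes, and addresses are the ones in the trace.

#!/usr/bin/env bash
# Sketch of the create/discover/teardown cycle traced above. Assumes a running
# nvmf_tgt with the TCP transport created ("rpc.py nvmf_create_transport -t tcp")
# and that RPC points at SPDK's rpc.py helper (the path is an assumption).
set -euo pipefail
RPC="./scripts/rpc.py"

for i in 1 2 3 4; do
  "$RPC" bdev_null_create "Null$i" 102400 512
  "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

# Expect 6 records: the discovery subsystem itself, cnode1-4, and the referral.
nvme discover -t tcp -a 10.0.0.2 -s 4420

# Teardown mirrors the trace: subsystems and bdevs, then the referral.
for i in 1 2 3 4; do
  "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  "$RPC" bdev_null_delete "Null$i"
done
"$RPC" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430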
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.976 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.976 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.976 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:40.976 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:40.976 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.976 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.976 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.976 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.976 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.976 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.976 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.884 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.144 00:10:43.144 real 0m9.427s 00:10:43.144 user 0m5.717s 00:10:43.144 sys 0m4.865s 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.144 ************************************ 00:10:43.144 END TEST nvmf_target_discovery 00:10:43.144 ************************************ 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.144 ************************************ 00:10:43.144 START TEST nvmf_referrals 00:10:43.144 ************************************ 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:43.144 * Looking for test storage... 
00:10:43.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.144 --rc genhtml_branch_coverage=1 00:10:43.144 --rc genhtml_function_coverage=1 00:10:43.144 --rc genhtml_legend=1 00:10:43.144 --rc geninfo_all_blocks=1 00:10:43.144 --rc geninfo_unexecuted_blocks=1 00:10:43.144 00:10:43.144 ' 00:10:43.144 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.144 --rc genhtml_branch_coverage=1 00:10:43.144 --rc genhtml_function_coverage=1 00:10:43.145 --rc genhtml_legend=1 00:10:43.145 --rc geninfo_all_blocks=1 00:10:43.145 --rc geninfo_unexecuted_blocks=1 00:10:43.145 00:10:43.145 ' 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.145 --rc genhtml_branch_coverage=1 00:10:43.145 --rc genhtml_function_coverage=1 00:10:43.145 --rc genhtml_legend=1 00:10:43.145 --rc geninfo_all_blocks=1 00:10:43.145 --rc geninfo_unexecuted_blocks=1 00:10:43.145 00:10:43.145 ' 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.145 --rc genhtml_branch_coverage=1 00:10:43.145 --rc genhtml_function_coverage=1 00:10:43.145 --rc genhtml_legend=1 00:10:43.145 --rc geninfo_all_blocks=1 00:10:43.145 --rc geninfo_unexecuted_blocks=1 00:10:43.145 00:10:43.145 ' 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.145 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
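One line in the sourcing trace above is a genuine shell diagnostic rather than test output: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq requires an integer on both sides, so whichever variable line 33 tests is empty at that point, the comparison prints "integer expression expected" on stderr, evaluates false, and execution simply falls through to the next branch. A small illustration of the failure mode and one defensive variant; the variable name below is only a stand-in, not the one common.sh uses.

#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" diagnostic from the trace:
# test(1)'s -eq needs an integer on both sides, and "" is not one.
shm_id=""                      # stand-in for an empty/unset value
if [ "$shm_id" -eq 1 ]; then   # prints the diagnostic on stderr, evaluates false
  echo "branch taken"
fi

# Defensive variant: default the value before the numeric comparison.
if [ "${shm_id:-0}" -eq 1 ]; then
  echo "branch taken"
fi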
00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.405 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:49.981 14:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:49.981 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:49.981 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:49.981 
14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:49.981 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:49.982 Found net devices under 0000:86:00.0: cvl_0_0 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:49.982 Found net devices under 0000:86:00.1: cvl_0_1 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:49.982 14:20:38 
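The probe traced above walks the supported-device table (Intel E810/X722 and Mellanox ConnectX device IDs), matches both ports of the E810 at 0000:86:00.0 and 0000:86:00.1 (0x8086:0x159b), and resolves each PCI function to its kernel netdev by globbing sysfs, which is how it arrives at cvl_0_0 and cvl_0_1. A condensed sketch of that last mapping step, assuming the same sysfs layout; the BDF list is taken from the trace.

#!/usr/bin/env bash
# Map PCI functions to their bound network interfaces via the same
# /sys/bus/pci/devices/<bdf>/net/* glob the trace uses.
for pci in 0000:86:00.0 0000:86:00.1; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  if [[ ! -e ${pci_net_devs[0]} ]]; then
    echo "no net device under $pci" >&2    # glob did not match anything
    continue
  fi
  pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the sysfs path prefix
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done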
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:49.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:10:49.982 00:10:49.982 --- 10.0.0.2 ping statistics --- 00:10:49.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.982 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:10:49.982 00:10:49.982 --- 10.0.0.1 ping statistics --- 00:10:49.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.982 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1379827 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1379827 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1379827 ']' 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
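nvmf_tcp_init, traced just above, is what turns the two physical ports into a loop the test can drive end to end: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits port 4420, and a ping in each direction proves the path before nvmf_tgt is launched inside the namespace. The same plumbing in isolation (root required; interface and namespace names are the ones in the trace, which additionally flushes stale addresses first and tags its iptables rule with an SPDK_NVMF comment).

#!/usr/bin/env bash
# Condensed version of the namespace plumbing in the trace (run as root).
set -euo pipefail
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator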
00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.982 [2024-11-17 14:20:38.409600] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:10:49.982 [2024-11-17 14:20:38.409656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.982 [2024-11-17 14:20:38.489407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.982 [2024-11-17 14:20:38.531836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.982 [2024-11-17 14:20:38.531875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.982 [2024-11-17 14:20:38.531883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.982 [2024-11-17 14:20:38.531889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.982 [2024-11-17 14:20:38.531893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.982 [2024-11-17 14:20:38.533504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.982 [2024-11-17 14:20:38.533610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.982 [2024-11-17 14:20:38.533719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.982 [2024-11-17 14:20:38.533720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.982 [2024-11-17 14:20:38.671291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:10:49.982 [2024-11-17 14:20:38.684653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:49.982 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.983 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.983 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:49.983 14:20:39 
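The assertions above all follow one pattern: mutate the referral list over RPC, then read it back two ways, nvmf_discovery_get_referrals piped through jq on the RPC side and nvme discover against the 8009 discovery listener on the wire side, and require both views to agree. A condensed sketch of the RPC-side half, mirroring referrals.sh steps 44 through 56 in the trace; it assumes the same running target and rpc.py helper path as the earlier sketch.

#!/usr/bin/env bash
# Add three referrals, verify count and addresses over RPC, remove them,
# and verify the list is empty again (each [[ ]] acts as an assertion).
set -euo pipefail
RPC="./scripts/rpc.py"   # assumed helper path, as in the earlier sketch

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
[[ $("$RPC" nvmf_discovery_get_referrals | jq length) -eq 3 ]]
"$RPC" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  "$RPC" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done
[[ $("$RPC" nvmf_discovery_get_referrals | jq length) -eq 0 ]]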
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:49.983 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:49.983 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:49.983 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:49.983 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:49.983 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.501 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.760 14:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.760 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:51.019 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:51.019 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:51.019 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:51.019 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:51.019 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:51.019 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.019 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:51.278 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
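
Editor's note: the trace above checks the same referral list through two independent paths, the RPC view (nvmf_discovery_get_referrals piped through jq '.[].address.traddr') and the on-wire view (nvme discover against the discovery service on 10.0.0.2:8009, with the "current discovery subsystem" record filtered out). A minimal sketch of that dual lookup, reconstructed from the xtrace rather than copied from target/referrals.sh (rpc_cmd, NVME_HOSTNQN and NVME_HOSTID are helpers/values supplied by this test environment):

    get_referral_ips() {
        if [[ $1 == rpc ]]; then
            # RPC view: referrals as registered on the target side
            rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs
        else
            # On-wire view: what a host sees in the discovery log page,
            # minus the entry describing the discovery subsystem itself
            nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
                -t tcp -a 10.0.0.2 -s 8009 -o json |
                jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
                sort | xargs
        fi
    }

    [[ $(get_referral_ips rpc) == $(get_referral_ips nvme) ]]   # both views must agree

The comparisons in the log such as [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] are this same equality test; the backslashes are only bash's xtrace quoting of the right-hand side.
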
00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.537 rmmod nvme_tcp 00:10:51.537 rmmod nvme_fabrics 00:10:51.537 rmmod nvme_keyring 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1379827 ']' 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1379827 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1379827 ']' 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1379827 00:10:51.537 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379827 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379827' 00:10:51.797 killing process with pid 1379827 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1379827 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1379827 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.797 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.797 14:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.335 00:10:54.335 real 0m10.868s 00:10:54.335 user 0m12.305s 00:10:54.335 sys 0m5.188s 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.335 ************************************ 00:10:54.335 END TEST nvmf_referrals 00:10:54.335 ************************************ 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:54.335 ************************************ 00:10:54.335 START TEST nvmf_connect_disconnect 00:10:54.335 ************************************ 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:54.335 * Looking for test storage... 00:10:54.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.336 14:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:54.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.336 --rc genhtml_branch_coverage=1 00:10:54.336 --rc genhtml_function_coverage=1 00:10:54.336 --rc genhtml_legend=1 00:10:54.336 --rc geninfo_all_blocks=1 00:10:54.336 --rc geninfo_unexecuted_blocks=1 00:10:54.336 00:10:54.336 ' 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:54.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.336 --rc genhtml_branch_coverage=1 00:10:54.336 --rc genhtml_function_coverage=1 00:10:54.336 --rc genhtml_legend=1 00:10:54.336 --rc geninfo_all_blocks=1 00:10:54.336 --rc geninfo_unexecuted_blocks=1 00:10:54.336 00:10:54.336 ' 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:54.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.336 --rc genhtml_branch_coverage=1 00:10:54.336 --rc genhtml_function_coverage=1 00:10:54.336 --rc genhtml_legend=1 00:10:54.336 --rc geninfo_all_blocks=1 00:10:54.336 --rc geninfo_unexecuted_blocks=1 00:10:54.336 00:10:54.336 ' 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:54.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.336 --rc genhtml_branch_coverage=1 00:10:54.336 --rc genhtml_function_coverage=1 00:10:54.336 --rc genhtml_legend=1 00:10:54.336 --rc geninfo_all_blocks=1 00:10:54.336 --rc geninfo_unexecuted_blocks=1 00:10:54.336 00:10:54.336 ' 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.336 14:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:54.336 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:54.337 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.337 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.337 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.337 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.337 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.337 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.337 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.337 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.337 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:54.337 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.337 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:00.908 
14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:00.908 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.908 
14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:00.908 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.908 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:00.909 Found net devices under 0000:86:00.0: cvl_0_0 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
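
Editor's note: the "Found net devices under 0000:86:00.x" lines come from plain sysfs walking: for each whitelisted e810 PCI function the harness expands /sys/bus/pci/devices/<bdf>/net/* to find the bound kernel interface. A short sketch of that mapping as suggested by the trace (variable names follow the xtrace; this is not the verbatim nvmf/common.sh, which also checks the interface operstate):

    for pci in "${pci_devs[@]}"; do
        # each entry under .../net/ is an interface the driver created, e.g. cvl_0_0
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue        # no netdev bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")        # .../net/cvl_0_0 -> cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
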
00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:00.909 Found net devices under 0000:86:00.1: cvl_0_1 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:11:00.909 00:11:00.909 --- 10.0.0.2 ping statistics --- 00:11:00.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.909 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:11:00.909 00:11:00.909 --- 10.0.0.1 ping statistics --- 00:11:00.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.909 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1383907 00:11:00.909 14:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1383907 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1383907 ']' 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.909 [2024-11-17 14:20:49.386271] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:11:00.909 [2024-11-17 14:20:49.386319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.909 [2024-11-17 14:20:49.464760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.909 [2024-11-17 14:20:49.505513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.909 [2024-11-17 14:20:49.505553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.909 [2024-11-17 14:20:49.505563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.909 [2024-11-17 14:20:49.505570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.909 [2024-11-17 14:20:49.505575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
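
Editor's note: since the physical port cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace earlier in the trace, nvmf_tgt itself has to run inside that namespace while its RPC socket stays visible to the harness. A rough sketch of the launch-and-wait step shown above (flags and the trap body are taken from the xtrace; waitforlisten is the harness helper behind the "Waiting for process to start up..." line, assumed to poll the RPC socket):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock answers RPCs
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

Only after this does the script issue nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener, which is the RPC sequence visible in the following entries.
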
00:11:00.909 [2024-11-17 14:20:49.507125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.909 [2024-11-17 14:20:49.507235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.909 [2024-11-17 14:20:49.507340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.909 [2024-11-17 14:20:49.507341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.909 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.909 [2024-11-17 14:20:49.652668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.910 14:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.910 [2024-11-17 14:20:49.715875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:00.910 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:04.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.381 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:17.381 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:17.381 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.381 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:17.381 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.381 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:17.381 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.381 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.381 rmmod nvme_tcp 00:11:17.381 rmmod nvme_fabrics 00:11:17.381 rmmod nvme_keyring 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1383907 ']' 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1383907 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1383907 ']' 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1383907 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
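
Editor's note: the five "disconnected 1 controller(s)" lines above are the visible output of the iteration loop: each pass connects the kernel initiator to cnode1 on 10.0.0.2:4420, waits for the namespace to surface, then disconnects. An assumed shape for that loop (the real connect_disconnect.sh may differ in detail; waitforserial is the harness helper that blocks until the SPDKISFASTANDAWESOME serial appears on the initiator):

    for ((i = 0; i < num_iterations; i++)); do
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME            # namespace visible to the host
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1 # prints the "NQN:... disconnected" line
    done
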
00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383907 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383907' 00:11:17.382 killing process with pid 1383907 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1383907 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1383907 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.382 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.288 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.288 00:11:19.288 real 0m25.336s 00:11:19.288 user 1m8.678s 00:11:19.288 sys 0m5.849s 00:11:19.288 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.288 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:19.288 ************************************ 00:11:19.288 END TEST nvmf_connect_disconnect 00:11:19.288 ************************************ 00:11:19.288 14:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:19.288 14:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.288 14:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.288 14:21:08 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.548 ************************************ 00:11:19.548 START TEST nvmf_multitarget 00:11:19.548 ************************************ 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:19.549 * Looking for test storage... 00:11:19.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.549 --rc genhtml_branch_coverage=1 00:11:19.549 --rc genhtml_function_coverage=1 00:11:19.549 --rc genhtml_legend=1 00:11:19.549 --rc geninfo_all_blocks=1 00:11:19.549 --rc geninfo_unexecuted_blocks=1 00:11:19.549 00:11:19.549 ' 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.549 --rc genhtml_branch_coverage=1 00:11:19.549 --rc genhtml_function_coverage=1 00:11:19.549 --rc genhtml_legend=1 00:11:19.549 --rc geninfo_all_blocks=1 00:11:19.549 --rc geninfo_unexecuted_blocks=1 00:11:19.549 00:11:19.549 ' 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.549 --rc genhtml_branch_coverage=1 00:11:19.549 --rc genhtml_function_coverage=1 00:11:19.549 --rc genhtml_legend=1 00:11:19.549 --rc geninfo_all_blocks=1 00:11:19.549 --rc geninfo_unexecuted_blocks=1 00:11:19.549 00:11:19.549 ' 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.549 --rc genhtml_branch_coverage=1 00:11:19.549 --rc genhtml_function_coverage=1 00:11:19.549 --rc genhtml_legend=1 00:11:19.549 --rc geninfo_all_blocks=1 00:11:19.549 --rc geninfo_unexecuted_blocks=1 00:11:19.549 00:11:19.549 ' 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.549 14:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.549 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:19.550 14:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.550 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
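Before any test traffic runs, gather_supported_nvmf_pci_devs (continuing below) matches the installed NICs against the e810/x722/mlx PCI ID tables built here. A rough equivalent of that discovery step, using lspci as a stand-in for common.sh's internal PCI cache (the vendor:device IDs are the ones in the arrays above):

# Sketch: enumerate candidate test NICs by PCI vendor:device ID, then map
# each PCI function to its kernel net device the same way common.sh does.
for id in 8086:1592 8086:159b 8086:37d2; do       # two E810 variants and X722, per the arrays
    for pci in $(lspci -D -d "$id" | awk '{print $1}'); do
        ls "/sys/bus/pci/devices/$pci/net/"       # net device names, e.g. cvl_0_0
    done
done
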
00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:26.133 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:26.133 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:26.133 Found net devices under 0000:86:00.0: cvl_0_0 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:26.133 Found net devices under 0000:86:00.1: cvl_0_1 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.133 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:11:26.134 00:11:26.134 --- 10.0.0.2 ping statistics --- 00:11:26.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.134 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:11:26.134 00:11:26.134 --- 10.0.0.1 ping statistics --- 00:11:26.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.134 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1390816 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1390816 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1390816 ']' 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.134 [2024-11-17 14:21:14.764449] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
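The records above also show nvmftestinit's network bring-up and the target launch: the first E810 port moves into a private namespace as the target side, the second stays in the root namespace as the initiator, and nvmf_tgt starts inside the namespace once both directions ping. Condensed (names and addresses as printed above; the poll loop is a stand-in for waitforlisten, not its actual body):

# Sketch of the topology assembled above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# nvmfappstart, as recorded around this point:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# waitforlisten stand-in (assumption): poll the RPC socket until it answers.
until rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
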
00:11:26.134 [2024-11-17 14:21:14.764497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.134 [2024-11-17 14:21:14.843298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.134 [2024-11-17 14:21:14.884293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.134 [2024-11-17 14:21:14.884332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.134 [2024-11-17 14:21:14.884339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.134 [2024-11-17 14:21:14.884345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.134 [2024-11-17 14:21:14.884350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.134 [2024-11-17 14:21:14.885900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.134 [2024-11-17 14:21:14.886009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.134 [2024-11-17 14:21:14.886120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.134 [2024-11-17 14:21:14.886121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.134 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.134 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.134 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:26.134 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:26.134 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:26.134 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:26.134 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:26.134 "nvmf_tgt_1" 00:11:26.134 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:26.134 "nvmf_tgt_2" 00:11:26.134 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
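With the app up, the multitarget test itself is a pure RPC count check: one default target exists, two more are created, and jq length must step 1 -> 3 -> 1 as they are deleted again. The whole assertion, sketched with the wrapper path the log shows (-s 32 copied from the xtrace):

# Sketch of the multitarget check: count targets before, after create, after delete.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]
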
00:11:26.134 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:26.393 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:26.393 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:26.393 true 00:11:26.393 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:26.653 true 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.653 rmmod nvme_tcp 00:11:26.653 rmmod nvme_fabrics 00:11:26.653 rmmod nvme_keyring 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1390816 ']' 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1390816 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1390816 ']' 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1390816 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.653 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1390816 00:11:26.913 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.913 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.913 14:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1390816' 00:11:26.913 killing process with pid 1390816 00:11:26.913 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1390816 00:11:26.913 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1390816 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.913 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.452 00:11:29.452 real 0m9.612s 00:11:29.452 user 0m7.176s 00:11:29.452 sys 0m4.936s 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:29.452 ************************************ 00:11:29.452 END TEST nvmf_multitarget 00:11:29.452 ************************************ 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.452 ************************************ 00:11:29.452 START TEST nvmf_rpc 00:11:29.452 ************************************ 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:29.452 * Looking for test storage... 
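The nvmftestfini records just above unwind everything in reverse; stripped of the retry loop and pid bookkeeping, the teardown amounts to the following (ip netns delete is our reading of _remove_spdk_ns, whose body the trace suppresses):

# Sketch of the teardown recorded above (pid 1390816 in this run).
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk                        # assumption: what _remove_spdk_ns does
ip -4 addr flush cvl_0_1
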
00:11:29.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.452 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:29.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.453 --rc genhtml_branch_coverage=1 00:11:29.453 --rc genhtml_function_coverage=1 00:11:29.453 --rc genhtml_legend=1 00:11:29.453 --rc geninfo_all_blocks=1 00:11:29.453 --rc geninfo_unexecuted_blocks=1 00:11:29.453 00:11:29.453 ' 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:29.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.453 --rc genhtml_branch_coverage=1 00:11:29.453 --rc genhtml_function_coverage=1 00:11:29.453 --rc genhtml_legend=1 00:11:29.453 --rc geninfo_all_blocks=1 00:11:29.453 --rc geninfo_unexecuted_blocks=1 00:11:29.453 00:11:29.453 ' 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:29.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.453 --rc genhtml_branch_coverage=1 00:11:29.453 --rc genhtml_function_coverage=1 00:11:29.453 --rc genhtml_legend=1 00:11:29.453 --rc geninfo_all_blocks=1 00:11:29.453 --rc geninfo_unexecuted_blocks=1 00:11:29.453 00:11:29.453 ' 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:29.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.453 --rc genhtml_branch_coverage=1 00:11:29.453 --rc genhtml_function_coverage=1 00:11:29.453 --rc genhtml_legend=1 00:11:29.453 --rc geninfo_all_blocks=1 00:11:29.453 --rc geninfo_unexecuted_blocks=1 00:11:29.453 00:11:29.453 ' 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
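The lcov probe that opens each test (here and in nvmf_multitarget above) hinges on scripts/common.sh's cmp_versions, whose xtrace this is: versions are split on '.', '-' and ':' and compared numerically field by field. A condensed sketch of that less-than test (simplified: the real helper also validates each field through decimal first):

# Sketch: field-wise numeric version compare, as walked through above.
lt() {
    local IFS=.-: i v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo "lcov predates 2.x: keep the branch/function coverage opts"
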
00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.453 14:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.453 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:36.030 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:36.030 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:36.030 Found net devices under 0000:86:00.0: cvl_0_0 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:36.030 Found net devices under 0000:86:00.1: cvl_0_1 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.030 14:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:11:36.030 00:11:36.030 --- 10.0.0.2 ping statistics --- 00:11:36.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.030 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:11:36.030 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
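The commands above carve a point-to-point test network out of the two E810 ports: cvl_0_0 is moved into the namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), an iptables rule admits the NVMe/TCP port, and a ping in each direction proves reachability. Condensed to its essentials, with names and addresses as in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator

Putting only the target behind a namespace lets initiator and target share one host while still exercising a real NIC-to-NIC TCP path.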
00:11:36.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:11:36.030 00:11:36.031 --- 10.0.0.1 ping statistics --- 00:11:36.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.031 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1394528 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1394528 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1394528 ']' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.031 [2024-11-17 14:21:24.501921] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
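nvmfappstart has just launched nvmf_tgt inside the target namespace and now waits in waitforlisten until the app answers on /var/tmp/spdk.sock before issuing RPCs. A minimal stand-in for that wait, assuming SPDK's scripts/rpc.py is invocable as rpc.py (the 100-try bound mirrors max_retries above):

    # Block until the SPDK app with PID $1 answers RPCs on its socket.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1     # app exited early
            if [ -S "$sock" ] && rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0                               # socket up and responding
            fi
            sleep 0.5
        done
        return 1
    }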
00:11:36.031 [2024-11-17 14:21:24.501971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.031 [2024-11-17 14:21:24.564744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.031 [2024-11-17 14:21:24.607657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.031 [2024-11-17 14:21:24.607695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.031 [2024-11-17 14:21:24.607702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.031 [2024-11-17 14:21:24.607709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.031 [2024-11-17 14:21:24.607714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.031 [2024-11-17 14:21:24.609189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.031 [2024-11-17 14:21:24.609298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.031 [2024-11-17 14:21:24.609405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.031 [2024-11-17 14:21:24.609407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:36.031 "tick_rate": 2300000000, 00:11:36.031 "poll_groups": [ 00:11:36.031 { 00:11:36.031 "name": "nvmf_tgt_poll_group_000", 00:11:36.031 "admin_qpairs": 0, 00:11:36.031 "io_qpairs": 0, 00:11:36.031 "current_admin_qpairs": 0, 00:11:36.031 "current_io_qpairs": 0, 00:11:36.031 "pending_bdev_io": 0, 00:11:36.031 "completed_nvme_io": 0, 00:11:36.031 "transports": [] 00:11:36.031 }, 00:11:36.031 { 00:11:36.031 "name": "nvmf_tgt_poll_group_001", 00:11:36.031 "admin_qpairs": 0, 00:11:36.031 "io_qpairs": 0, 00:11:36.031 "current_admin_qpairs": 0, 00:11:36.031 "current_io_qpairs": 0, 00:11:36.031 "pending_bdev_io": 0, 00:11:36.031 "completed_nvme_io": 0, 00:11:36.031 "transports": [] 00:11:36.031 }, 00:11:36.031 { 00:11:36.031 "name": "nvmf_tgt_poll_group_002", 00:11:36.031 "admin_qpairs": 0, 00:11:36.031 "io_qpairs": 0, 00:11:36.031 
"current_admin_qpairs": 0, 00:11:36.031 "current_io_qpairs": 0, 00:11:36.031 "pending_bdev_io": 0, 00:11:36.031 "completed_nvme_io": 0, 00:11:36.031 "transports": [] 00:11:36.031 }, 00:11:36.031 { 00:11:36.031 "name": "nvmf_tgt_poll_group_003", 00:11:36.031 "admin_qpairs": 0, 00:11:36.031 "io_qpairs": 0, 00:11:36.031 "current_admin_qpairs": 0, 00:11:36.031 "current_io_qpairs": 0, 00:11:36.031 "pending_bdev_io": 0, 00:11:36.031 "completed_nvme_io": 0, 00:11:36.031 "transports": [] 00:11:36.031 } 00:11:36.031 ] 00:11:36.031 }' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.031 [2024-11-17 14:21:24.855456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:36.031 "tick_rate": 2300000000, 00:11:36.031 "poll_groups": [ 00:11:36.031 { 00:11:36.031 "name": "nvmf_tgt_poll_group_000", 00:11:36.031 "admin_qpairs": 0, 00:11:36.031 "io_qpairs": 0, 00:11:36.031 "current_admin_qpairs": 0, 00:11:36.031 "current_io_qpairs": 0, 00:11:36.031 "pending_bdev_io": 0, 00:11:36.031 "completed_nvme_io": 0, 00:11:36.031 "transports": [ 00:11:36.031 { 00:11:36.031 "trtype": "TCP" 00:11:36.031 } 00:11:36.031 ] 00:11:36.031 }, 00:11:36.031 { 00:11:36.031 "name": "nvmf_tgt_poll_group_001", 00:11:36.031 "admin_qpairs": 0, 00:11:36.031 "io_qpairs": 0, 00:11:36.031 "current_admin_qpairs": 0, 00:11:36.031 "current_io_qpairs": 0, 00:11:36.031 "pending_bdev_io": 0, 00:11:36.031 "completed_nvme_io": 0, 00:11:36.031 "transports": [ 00:11:36.031 { 00:11:36.031 "trtype": "TCP" 00:11:36.031 } 00:11:36.031 ] 00:11:36.031 }, 00:11:36.031 { 00:11:36.031 "name": "nvmf_tgt_poll_group_002", 00:11:36.031 "admin_qpairs": 0, 00:11:36.031 "io_qpairs": 0, 00:11:36.031 "current_admin_qpairs": 0, 00:11:36.031 "current_io_qpairs": 0, 00:11:36.031 "pending_bdev_io": 0, 00:11:36.031 "completed_nvme_io": 0, 00:11:36.031 "transports": [ 00:11:36.031 { 00:11:36.031 "trtype": "TCP" 
00:11:36.031 } 00:11:36.031 ] 00:11:36.031 }, 00:11:36.031 { 00:11:36.031 "name": "nvmf_tgt_poll_group_003", 00:11:36.031 "admin_qpairs": 0, 00:11:36.031 "io_qpairs": 0, 00:11:36.031 "current_admin_qpairs": 0, 00:11:36.031 "current_io_qpairs": 0, 00:11:36.031 "pending_bdev_io": 0, 00:11:36.031 "completed_nvme_io": 0, 00:11:36.031 "transports": [ 00:11:36.031 { 00:11:36.031 "trtype": "TCP" 00:11:36.031 } 00:11:36.031 ] 00:11:36.031 } 00:11:36.031 ] 00:11:36.031 }' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:36.031 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.032 Malloc1 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.032 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
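The (( 4 == 4 )) and (( 0 == 0 )) checks above come from two one-line helpers at the top of rpc.sh: jcount counts the nodes a jq filter selects from the captured $stats JSON, and jsum adds them. Reconstructed from the trace (the in-tree definitions may differ in detail):

    # Count how many values the jq filter selects from $stats.
    jcount() {
        local filter=$1
        jq "$filter" <<< "$stats" | wc -l
    }

    # Sum the numeric values the jq filter selects from $stats.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

Here jcount '.poll_groups[].name' confirms one poll group per core in the 0xF mask, and jsum '.poll_groups[].io_qpairs' confirms no queue pairs exist before any host connects.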
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.032 [2024-11-17 14:21:25.031600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:36.032 [2024-11-17 14:21:25.060164] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:36.032 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:36.032 could not add new controller: failed to write to nvme-fabrics device 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:36.032 14:21:25 
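The connect attempt above is supposed to fail: the subsystem was switched to an explicit allow list (allow_any_host -d) that does not contain the host NQN, so the target rejects the connection and nvme-cli reports the I/O error. The test therefore wraps the command in NOT, which inverts the exit status; a sketch of that wrapper, reduced from the type -t / type -P validation visible in the trace:

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1        # command unexpectedly succeeded
        fi
        return 0            # failure was the expected outcome
    }

With this, NOT nvme connect ... turns the rejected connect into a passing test step, and the es=1 bookkeeping above records the non-zero status it inverted.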
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.032 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.971 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.971 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:36.971 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.971 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:36.971 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
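After nvmf_subsystem_add_host whitelists the host NQN, the connect succeeds and waitforserial polls lsblk until a namespace whose serial matches SPDKISFASTANDAWESOME appears; waitforserial_disconnect does the inverse after nvme disconnect. Both loops, reconstructed from the trace (the 15-try bound matches the (( i++ <= 15 )) check):

    # Wait until $2 (default 1) block devices with serial $1 are visible.
    waitforserial() {
        local serial=$1 want=${2:-1} i=0 have
        while (( i++ <= 15 )); do
            sleep 2
            have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( have == want )) && return 0
        done
        return 1
    }

    # Wait until no block device with serial $1 remains.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }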
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.508 [2024-11-17 14:21:28.381801] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:39.508 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:39.508 could not add new controller: failed to write to nvme-fabrics device 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.508 
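This passage walks the subsystem through its access-control states: with the host NQN on the allow list the connect succeeds, after nvmf_subsystem_remove_host the same connect is rejected again, and nvmf_subsystem_allow_any_host -e then opens the subsystem to every initiator (the earlier -d had closed it). The equivalent RPC sequence, shown here through scripts/rpc.py with the UUID-based host NQN from the trace abbreviated to $HOST_NQN:

    NQN=nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_allow_any_host -d "$NQN"        # enforce the allow list
    rpc.py nvmf_subsystem_add_host "$NQN" "$HOST_NQN"     # admit one host
    rpc.py nvmf_subsystem_remove_host "$NQN" "$HOST_NQN"  # revoke it again
    rpc.py nvmf_subsystem_allow_any_host -e "$NQN"        # admit every host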
14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.508 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.458 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.458 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:40.458 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.458 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:40.458 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:42.362 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:42.362 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:42.362 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.362 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:42.362 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.362 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:42.362 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.622 
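From here to the end of the section the log repeats one provisioning/IO/teardown cycle five times; loops=5 was set when rpc.sh started, and the first nvmf_create_subsystem of the loop has just run. Stripped of the xtrace noise, each iteration amounts to the following, with $HOST_NQN standing in for the UUID-based host NQN used in the trace:

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn="$HOST_NQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Creating and deleting the subsystem on every pass, rather than reusing it, is what makes the loop a lifecycle test: each iteration must leave the target clean enough for the next connect to succeed.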
14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.622 [2024-11-17 14:21:31.791654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.622 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.001 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.001 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:44.001 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.001 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:44.001 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:45.908 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:45.908 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:45.908 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.908 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:45.908 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.908 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:45.908 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.908 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.908 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:45.908 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:45.908 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.908 [2024-11-17 14:21:35.056535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.908 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.287 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.287 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.287 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.287 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:47.287 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.194 [2024-11-17 14:21:38.349861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.194 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.575 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.575 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.575 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.575 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.575 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:52.483 
14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.483 [2024-11-17 14:21:41.699097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:52.483 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.742 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.742 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.742 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:52.742 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.742 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.742 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.742 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.122 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.122 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:54.122 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.122 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:54.122 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.029 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.029 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.029 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.029 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:56.029 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.029 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:56.029 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.029 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.029 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.029 [2024-11-17 14:21:45.063718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.029 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.408 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.408 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.408 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.408 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.408 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:59.314 
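The seq 1 5 that just printed opens a second loop (target/rpc.sh@99-107 in the markers) that churns the same subsystem five more times with no host attached at all: create, add the listener, add a namespace without an explicit NSID, remove it, delete the subsystem. The only RPC-level difference from the connected loop above, assuming the same NQN:

# no -n flag: the target auto-assigns the NSID (1 here), so the
# matching remove targets namespace 1 rather than 5
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1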
14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.314 [2024-11-17 14:21:48.472866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.314 [2024-11-17 14:21:48.520901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.314 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 
14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 [2024-11-17 14:21:48.569052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 [2024-11-17 14:21:48.617194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.574 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.575 [2024-11-17 14:21:48.665370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:59.575 "tick_rate": 2300000000, 00:11:59.575 "poll_groups": [ 00:11:59.575 { 00:11:59.575 "name": "nvmf_tgt_poll_group_000", 00:11:59.575 "admin_qpairs": 2, 00:11:59.575 "io_qpairs": 168, 00:11:59.575 "current_admin_qpairs": 0, 00:11:59.575 "current_io_qpairs": 0, 00:11:59.575 "pending_bdev_io": 0, 00:11:59.575 "completed_nvme_io": 222, 00:11:59.575 "transports": [ 00:11:59.575 { 00:11:59.575 "trtype": "TCP" 00:11:59.575 } 00:11:59.575 ] 00:11:59.575 }, 00:11:59.575 { 00:11:59.575 "name": "nvmf_tgt_poll_group_001", 00:11:59.575 "admin_qpairs": 2, 00:11:59.575 "io_qpairs": 168, 00:11:59.575 "current_admin_qpairs": 0, 00:11:59.575 "current_io_qpairs": 0, 00:11:59.575 "pending_bdev_io": 0, 00:11:59.575 "completed_nvme_io": 216, 00:11:59.575 "transports": [ 00:11:59.575 { 00:11:59.575 "trtype": "TCP" 00:11:59.575 } 00:11:59.575 ] 00:11:59.575 }, 00:11:59.575 { 00:11:59.575 "name": "nvmf_tgt_poll_group_002", 00:11:59.575 "admin_qpairs": 1, 00:11:59.575 "io_qpairs": 168, 00:11:59.575 "current_admin_qpairs": 0, 00:11:59.575 "current_io_qpairs": 0, 00:11:59.575 "pending_bdev_io": 0, 00:11:59.575 "completed_nvme_io": 268, 00:11:59.575 "transports": [ 00:11:59.575 { 00:11:59.575 "trtype": "TCP" 00:11:59.575 } 00:11:59.575 ] 00:11:59.575 }, 00:11:59.575 { 00:11:59.575 "name": "nvmf_tgt_poll_group_003", 00:11:59.575 "admin_qpairs": 2, 00:11:59.575 "io_qpairs": 168, 00:11:59.575 "current_admin_qpairs": 0, 00:11:59.575 "current_io_qpairs": 0, 00:11:59.575 "pending_bdev_io": 0, 00:11:59.575 "completed_nvme_io": 316, 00:11:59.575 "transports": [ 00:11:59.575 { 00:11:59.575 "trtype": "TCP" 00:11:59.575 } 00:11:59.575 ] 00:11:59.575 } 00:11:59.575 ] 00:11:59.575 }' 00:11:59.575 14:21:48 
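The jsum checks traced next (target/rpc.sh@112-113) reduce this nvmf_get_stats dump to two totals: jq pulls the requested field out of every poll group, awk sums the column, and the test only asserts each total is positive. Replayed by hand against the stats above, assuming the same rpc.py path:

scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
# -> 2+2+1+2 = 7 admin qpairs; the io_qpairs variant likewise totals 4*168 = 672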
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:59.575 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.835 rmmod nvme_tcp 00:11:59.835 rmmod nvme_fabrics 00:11:59.835 rmmod nvme_keyring 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1394528 ']' 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1394528 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1394528 ']' 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1394528 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1394528 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1394528' 00:11:59.835 killing process with pid 1394528 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1394528 00:11:59.835 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1394528 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.102 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.013 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.013 00:12:02.013 real 0m32.982s 00:12:02.013 user 1m39.281s 00:12:02.013 sys 0m6.615s 00:12:02.013 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.013 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.013 ************************************ 00:12:02.013 END TEST nvmf_rpc 00:12:02.013 ************************************ 00:12:02.013 14:21:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:02.013 14:21:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.013 14:21:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.014 14:21:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.273 ************************************ 00:12:02.273 START TEST nvmf_invalid 00:12:02.273 ************************************ 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:02.273 * Looking for test storage... 
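With nvmf_rpc passed and its target torn down, run_test moves on to nvmf_invalid, which drives target/invalid.sh over the same TCP transport. That script deliberately hands malformed arguments to the nvmf RPCs and asserts on the JSON-RPC error payloads; its first probe, traced further below, names a target that was never created:

# expected to fail: no target named 'foobar' exists
scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32364
# expected response: code -32603, "Unable to find target foobar"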
00:12:02.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:02.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.273 --rc genhtml_branch_coverage=1 00:12:02.273 --rc genhtml_function_coverage=1 00:12:02.273 --rc genhtml_legend=1 00:12:02.273 --rc geninfo_all_blocks=1 00:12:02.273 --rc geninfo_unexecuted_blocks=1 00:12:02.273 00:12:02.273 ' 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:02.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.273 --rc genhtml_branch_coverage=1 00:12:02.273 --rc genhtml_function_coverage=1 00:12:02.273 --rc genhtml_legend=1 00:12:02.273 --rc geninfo_all_blocks=1 00:12:02.273 --rc geninfo_unexecuted_blocks=1 00:12:02.273 00:12:02.273 ' 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:02.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.273 --rc genhtml_branch_coverage=1 00:12:02.273 --rc genhtml_function_coverage=1 00:12:02.273 --rc genhtml_legend=1 00:12:02.273 --rc geninfo_all_blocks=1 00:12:02.273 --rc geninfo_unexecuted_blocks=1 00:12:02.273 00:12:02.273 ' 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:02.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.273 --rc genhtml_branch_coverage=1 00:12:02.273 --rc genhtml_function_coverage=1 00:12:02.273 --rc genhtml_legend=1 00:12:02.273 --rc geninfo_all_blocks=1 00:12:02.273 --rc geninfo_unexecuted_blocks=1 00:12:02.273 00:12:02.273 ' 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:02.273 14:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.273 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.274 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:08.848 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:08.848 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.848 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:08.849 Found net devices under 0000:86:00.0: cvl_0_0 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:08.849 Found net devices under 0000:86:00.1: cvl_0_1 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:08.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:12:08.849 00:12:08.849 --- 10.0.0.2 ping statistics --- 00:12:08.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.849 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:12:08.849 00:12:08.849 --- 10.0.0.1 ping statistics --- 00:12:08.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.849 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1402203 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1402203 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1402203 ']' 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.849 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:08.849 [2024-11-17 14:21:57.488189] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:12:08.849 [2024-11-17 14:21:57.488231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.849 [2024-11-17 14:21:57.567918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.849 [2024-11-17 14:21:57.611182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.849 [2024-11-17 14:21:57.611217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.849 [2024-11-17 14:21:57.611225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.849 [2024-11-17 14:21:57.611231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.849 [2024-11-17 14:21:57.611236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.850 [2024-11-17 14:21:57.612708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.850 [2024-11-17 14:21:57.612818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.850 [2024-11-17 14:21:57.612923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.850 [2024-11-17 14:21:57.612923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32364 00:12:08.850 [2024-11-17 14:21:57.931187] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:08.850 { 00:12:08.850 "nqn": "nqn.2016-06.io.spdk:cnode32364", 00:12:08.850 "tgt_name": "foobar", 00:12:08.850 "method": "nvmf_create_subsystem", 00:12:08.850 "req_id": 1 00:12:08.850 } 00:12:08.850 Got JSON-RPC error response 00:12:08.850 response: 00:12:08.850 { 00:12:08.850 "code": -32603, 00:12:08.850 "message": "Unable to find target foobar" 00:12:08.850 }' 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:08.850 { 00:12:08.850 "nqn": "nqn.2016-06.io.spdk:cnode32364", 00:12:08.850 "tgt_name": "foobar", 00:12:08.850 "method": "nvmf_create_subsystem", 00:12:08.850 "req_id": 1 00:12:08.850 } 00:12:08.850 Got JSON-RPC error response 00:12:08.850 
response: 00:12:08.850 { 00:12:08.850 "code": -32603, 00:12:08.850 "message": "Unable to find target foobar" 00:12:08.850 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:08.850 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26102 00:12:09.143 [2024-11-17 14:21:58.143942] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26102: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:09.143 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:09.143 { 00:12:09.143 "nqn": "nqn.2016-06.io.spdk:cnode26102", 00:12:09.143 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:09.143 "method": "nvmf_create_subsystem", 00:12:09.143 "req_id": 1 00:12:09.143 } 00:12:09.143 Got JSON-RPC error response 00:12:09.143 response: 00:12:09.143 { 00:12:09.143 "code": -32602, 00:12:09.143 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:09.143 }' 00:12:09.143 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:09.143 { 00:12:09.143 "nqn": "nqn.2016-06.io.spdk:cnode26102", 00:12:09.143 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:09.143 "method": "nvmf_create_subsystem", 00:12:09.143 "req_id": 1 00:12:09.143 } 00:12:09.143 Got JSON-RPC error response 00:12:09.143 response: 00:12:09.143 { 00:12:09.143 "code": -32602, 00:12:09.143 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:09.143 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:09.143 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:09.143 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18484 00:12:09.489 [2024-11-17 14:21:58.360653] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18484: invalid model number 'SPDK_Controller' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:09.489 { 00:12:09.489 "nqn": "nqn.2016-06.io.spdk:cnode18484", 00:12:09.489 "model_number": "SPDK_Controller\u001f", 00:12:09.489 "method": "nvmf_create_subsystem", 00:12:09.489 "req_id": 1 00:12:09.489 } 00:12:09.489 Got JSON-RPC error response 00:12:09.489 response: 00:12:09.489 { 00:12:09.489 "code": -32602, 00:12:09.489 "message": "Invalid MN SPDK_Controller\u001f" 00:12:09.489 }' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:09.489 { 00:12:09.489 "nqn": "nqn.2016-06.io.spdk:cnode18484", 00:12:09.489 "model_number": "SPDK_Controller\u001f", 00:12:09.489 "method": "nvmf_create_subsystem", 00:12:09.489 "req_id": 1 00:12:09.489 } 00:12:09.489 Got JSON-RPC error response 00:12:09.489 response: 00:12:09.489 { 00:12:09.489 "code": -32602, 00:12:09.489 "message": "Invalid MN SPDK_Controller\u001f" 00:12:09.489 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:09.489 14:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
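# Aside: the xtrace above and below is the per-character expansion of the
# gen_random_s helper in target/invalid.sh, which builds the random serial
# and model numbers fed to nvmf_create_subsystem. Condensed, the traced loop
# amounts to roughly the sketch below (variable names and the 32..127 code
# range are taken from the trace itself; the helper's real body may differ):
gen_random_s() {
	local length=$1 ll string
	local chars=($(seq 32 127)) # printable ASCII plus DEL, per the traced array
	for ((ll = 0; ll < length; ll++)); do
		# pick a random code point, render it via printf %x + echo -e, append it
		string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
	done
	echo "$string"
}
# (the traced helper also checks the first character against '-', cf. the
# [[ B == \- ]] test at invalid.sh@28, so echo is not handed a leading dash)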
00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x3a' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 87 00:12:09.489 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ B == \- ]] 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'B?x+l)X0:(eYy'\''sW!4gN' 00:12:09.490 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'B?x+l)X0:(eYy'\''sW!4gN' nqn.2016-06.io.spdk:cnode15958 00:12:09.801 [2024-11-17 14:21:58.709839] 
nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15958: invalid serial number 'B?x+l)X0:(eYy'sW!4gN' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:09.801 { 00:12:09.801 "nqn": "nqn.2016-06.io.spdk:cnode15958", 00:12:09.801 "serial_number": "B?x+l)X0:(eYy'\''sW\u007f!4gN", 00:12:09.801 "method": "nvmf_create_subsystem", 00:12:09.801 "req_id": 1 00:12:09.801 } 00:12:09.801 Got JSON-RPC error response 00:12:09.801 response: 00:12:09.801 { 00:12:09.801 "code": -32602, 00:12:09.801 "message": "Invalid SN B?x+l)X0:(eYy'\''sW\u007f!4gN" 00:12:09.801 }' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:09.801 { 00:12:09.801 "nqn": "nqn.2016-06.io.spdk:cnode15958", 00:12:09.801 "serial_number": "B?x+l)X0:(eYy'sW\u007f!4gN", 00:12:09.801 "method": "nvmf_create_subsystem", 00:12:09.801 "req_id": 1 00:12:09.801 } 00:12:09.801 Got JSON-RPC error response 00:12:09.801 response: 00:12:09.801 { 00:12:09.801 "code": -32602, 00:12:09.801 "message": "Invalid SN B?x+l)X0:(eYy'sW\u007f!4gN" 00:12:09.801 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 41 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.801 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:09.802 14:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:09.802 14:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
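# Aside: each negative case in this suite repeats the pattern visible in the
# surrounding trace -- one rpc.py call with a single invalid field, capture of
# the JSON-RPC error text into "$out", then a glob match on the message (the
# 41-character random model number being assembled here is used that way just
# below, against cnode9812). A condensed sketch of the pattern (rpc.py path
# from the trace; exact capture/assert details may differ from invalid.sh):
out=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
	nvmf_create_subsystem -d "$model_number" nqn.2016-06.io.spdk:cnode9812 2>&1) || true
[[ $out == *"Invalid MN"* ]] # expect code -32602 with an "Invalid MN ..." message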
00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:09.802 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:09.803 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:09.803 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.803 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.803 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:09.803 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:09.803 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:09.803 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.803 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.803 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:09.803 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 6 == \- ]] 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '6m) 2LuW60`@"EkTV]v:SX$by?0o0OYqU";.|A7bD' 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '6m) 2LuW60`@"EkTV]v:SX$by?0o0OYqU";.|A7bD' nqn.2016-06.io.spdk:cnode9812 00:12:10.122 [2024-11-17 14:21:59.183377] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9812: invalid model number '6m) 2LuW60`@"EkTV]v:SX$by?0o0OYqU";.|A7bD' 00:12:10.122 14:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:10.122 { 00:12:10.122 "nqn": "nqn.2016-06.io.spdk:cnode9812", 00:12:10.122 "model_number": "6m) 2LuW60`@\"EkTV]v:SX$by?0o0OYqU\";.|A7bD", 00:12:10.122 "method": "nvmf_create_subsystem", 00:12:10.122 "req_id": 1 00:12:10.122 } 00:12:10.122 Got JSON-RPC error response 00:12:10.122 response: 00:12:10.122 { 00:12:10.122 "code": -32602, 00:12:10.122 "message": "Invalid MN 6m) 2LuW60`@\"EkTV]v:SX$by?0o0OYqU\";.|A7bD" 00:12:10.122 }' 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:10.122 { 00:12:10.122 "nqn": "nqn.2016-06.io.spdk:cnode9812", 00:12:10.122 "model_number": "6m) 2LuW60`@\"EkTV]v:SX$by?0o0OYqU\";.|A7bD", 00:12:10.122 "method": "nvmf_create_subsystem", 00:12:10.122 "req_id": 1 00:12:10.122 } 00:12:10.122 Got JSON-RPC error response 00:12:10.122 response: 00:12:10.122 { 00:12:10.122 "code": -32602, 00:12:10.122 "message": "Invalid MN 6m) 2LuW60`@\"EkTV]v:SX$by?0o0OYqU\";.|A7bD" 00:12:10.122 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:10.122 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:10.419 [2024-11-17 14:21:59.388121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.419 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:10.419 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:10.419 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:10.419 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:10.419 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:10.419 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:10.678 [2024-11-17 14:21:59.817529] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:10.678 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:10.678 { 00:12:10.678 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:10.678 "listen_address": { 00:12:10.678 "trtype": "tcp", 00:12:10.678 "traddr": "", 00:12:10.678 "trsvcid": "4421" 00:12:10.678 }, 00:12:10.678 "method": "nvmf_subsystem_remove_listener", 00:12:10.678 "req_id": 1 00:12:10.678 } 00:12:10.678 Got JSON-RPC error response 00:12:10.678 response: 00:12:10.678 { 00:12:10.678 "code": -32602, 00:12:10.678 "message": "Invalid parameters" 00:12:10.678 }' 00:12:10.678 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:10.678 { 00:12:10.678 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:10.678 "listen_address": { 00:12:10.678 "trtype": "tcp", 00:12:10.678 "traddr": "", 00:12:10.678 "trsvcid": "4421" 00:12:10.678 }, 00:12:10.678 "method": "nvmf_subsystem_remove_listener", 00:12:10.678 "req_id": 1 00:12:10.678 } 00:12:10.678 Got JSON-RPC error response 00:12:10.678 response: 00:12:10.678 { 00:12:10.678 "code": -32602, 00:12:10.678 "message": "Invalid parameters" 00:12:10.678 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ 
\l\i\s\t\e\n\e\r\.* ]] 00:12:10.678 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9985 -i 0 00:12:10.937 [2024-11-17 14:22:00.022139] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9985: invalid cntlid range [0-65519] 00:12:10.937 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:10.937 { 00:12:10.937 "nqn": "nqn.2016-06.io.spdk:cnode9985", 00:12:10.937 "min_cntlid": 0, 00:12:10.937 "method": "nvmf_create_subsystem", 00:12:10.937 "req_id": 1 00:12:10.937 } 00:12:10.937 Got JSON-RPC error response 00:12:10.937 response: 00:12:10.937 { 00:12:10.937 "code": -32602, 00:12:10.937 "message": "Invalid cntlid range [0-65519]" 00:12:10.937 }' 00:12:10.937 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:10.937 { 00:12:10.937 "nqn": "nqn.2016-06.io.spdk:cnode9985", 00:12:10.937 "min_cntlid": 0, 00:12:10.937 "method": "nvmf_create_subsystem", 00:12:10.937 "req_id": 1 00:12:10.937 } 00:12:10.937 Got JSON-RPC error response 00:12:10.937 response: 00:12:10.937 { 00:12:10.937 "code": -32602, 00:12:10.937 "message": "Invalid cntlid range [0-65519]" 00:12:10.937 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.937 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13699 -i 65520 00:12:11.196 [2024-11-17 14:22:00.230861] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13699: invalid cntlid range [65520-65519] 00:12:11.196 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:11.196 { 00:12:11.196 "nqn": "nqn.2016-06.io.spdk:cnode13699", 00:12:11.196 "min_cntlid": 65520, 00:12:11.196 "method": "nvmf_create_subsystem", 00:12:11.196 "req_id": 1 00:12:11.196 } 00:12:11.196 Got JSON-RPC error response 00:12:11.196 response: 00:12:11.196 { 00:12:11.196 "code": -32602, 00:12:11.196 "message": "Invalid cntlid range [65520-65519]" 00:12:11.196 }' 00:12:11.196 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:11.196 { 00:12:11.196 "nqn": "nqn.2016-06.io.spdk:cnode13699", 00:12:11.196 "min_cntlid": 65520, 00:12:11.196 "method": "nvmf_create_subsystem", 00:12:11.196 "req_id": 1 00:12:11.196 } 00:12:11.196 Got JSON-RPC error response 00:12:11.196 response: 00:12:11.196 { 00:12:11.196 "code": -32602, 00:12:11.196 "message": "Invalid cntlid range [65520-65519]" 00:12:11.196 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:11.196 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7432 -I 0 00:12:11.456 [2024-11-17 14:22:00.435526] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7432: invalid cntlid range [1-0] 00:12:11.456 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:11.456 { 00:12:11.456 "nqn": "nqn.2016-06.io.spdk:cnode7432", 00:12:11.456 "max_cntlid": 0, 00:12:11.456 "method": "nvmf_create_subsystem", 00:12:11.456 "req_id": 1 00:12:11.456 } 00:12:11.456 Got JSON-RPC error response 00:12:11.456 response: 00:12:11.456 { 00:12:11.456 "code": 
-32602, 00:12:11.456 "message": "Invalid cntlid range [1-0]" 00:12:11.456 }' 00:12:11.456 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:11.456 { 00:12:11.456 "nqn": "nqn.2016-06.io.spdk:cnode7432", 00:12:11.456 "max_cntlid": 0, 00:12:11.456 "method": "nvmf_create_subsystem", 00:12:11.456 "req_id": 1 00:12:11.456 } 00:12:11.456 Got JSON-RPC error response 00:12:11.456 response: 00:12:11.456 { 00:12:11.456 "code": -32602, 00:12:11.456 "message": "Invalid cntlid range [1-0]" 00:12:11.456 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:11.456 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13207 -I 65520 00:12:11.456 [2024-11-17 14:22:00.632219] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13207: invalid cntlid range [1-65520] 00:12:11.456 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:11.456 { 00:12:11.456 "nqn": "nqn.2016-06.io.spdk:cnode13207", 00:12:11.456 "max_cntlid": 65520, 00:12:11.456 "method": "nvmf_create_subsystem", 00:12:11.456 "req_id": 1 00:12:11.456 } 00:12:11.456 Got JSON-RPC error response 00:12:11.456 response: 00:12:11.456 { 00:12:11.456 "code": -32602, 00:12:11.456 "message": "Invalid cntlid range [1-65520]" 00:12:11.456 }' 00:12:11.456 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:11.456 { 00:12:11.456 "nqn": "nqn.2016-06.io.spdk:cnode13207", 00:12:11.456 "max_cntlid": 65520, 00:12:11.456 "method": "nvmf_create_subsystem", 00:12:11.456 "req_id": 1 00:12:11.456 } 00:12:11.456 Got JSON-RPC error response 00:12:11.456 response: 00:12:11.456 { 00:12:11.456 "code": -32602, 00:12:11.456 "message": "Invalid cntlid range [1-65520]" 00:12:11.456 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:11.456 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30486 -i 6 -I 5 00:12:11.715 [2024-11-17 14:22:00.840948] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30486: invalid cntlid range [6-5] 00:12:11.715 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:11.715 { 00:12:11.715 "nqn": "nqn.2016-06.io.spdk:cnode30486", 00:12:11.715 "min_cntlid": 6, 00:12:11.715 "max_cntlid": 5, 00:12:11.715 "method": "nvmf_create_subsystem", 00:12:11.715 "req_id": 1 00:12:11.715 } 00:12:11.715 Got JSON-RPC error response 00:12:11.715 response: 00:12:11.715 { 00:12:11.715 "code": -32602, 00:12:11.715 "message": "Invalid cntlid range [6-5]" 00:12:11.715 }' 00:12:11.715 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:11.715 { 00:12:11.715 "nqn": "nqn.2016-06.io.spdk:cnode30486", 00:12:11.715 "min_cntlid": 6, 00:12:11.715 "max_cntlid": 5, 00:12:11.715 "method": "nvmf_create_subsystem", 00:12:11.715 "req_id": 1 00:12:11.715 } 00:12:11.715 Got JSON-RPC error response 00:12:11.715 response: 00:12:11.715 { 00:12:11.715 "code": -32602, 00:12:11.715 "message": "Invalid cntlid range [6-5]" 00:12:11.715 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:11.715 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:11.975 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:11.975 { 00:12:11.975 "name": "foobar", 00:12:11.975 "method": "nvmf_delete_target", 00:12:11.975 "req_id": 1 00:12:11.975 } 00:12:11.975 Got JSON-RPC error response 00:12:11.975 response: 00:12:11.975 { 00:12:11.975 "code": -32602, 00:12:11.975 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:11.975 }' 00:12:11.975 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:11.975 { 00:12:11.975 "name": "foobar", 00:12:11.975 "method": "nvmf_delete_target", 00:12:11.975 "req_id": 1 00:12:11.976 } 00:12:11.976 Got JSON-RPC error response 00:12:11.976 response: 00:12:11.976 { 00:12:11.976 "code": -32602, 00:12:11.976 "message": "The specified target doesn't exist, cannot delete it." 00:12:11.976 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:11.976 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:11.976 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:11.976 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:11.976 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:11.976 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:11.976 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:11.976 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.976 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:11.976 rmmod nvme_tcp 00:12:11.976 rmmod nvme_fabrics 00:12:11.976 rmmod nvme_keyring 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1402203 ']' 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1402203 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1402203 ']' 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1402203 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1402203 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1402203' 00:12:11.976 killing process with pid 1402203 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1402203 00:12:11.976 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1402203 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.235 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.143 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.143 00:12:14.143 real 0m12.082s 00:12:14.143 user 0m18.828s 00:12:14.143 sys 0m5.496s 00:12:14.143 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.143 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:14.143 ************************************ 00:12:14.143 END TEST nvmf_invalid 00:12:14.143 ************************************ 00:12:14.403 14:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:14.403 14:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.403 14:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.403 14:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.403 ************************************ 00:12:14.403 START TEST nvmf_connect_stress 00:12:14.403 ************************************ 00:12:14.403 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:14.403 * Looking for test storage... 
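The nvmf_invalid traces above all exercise the same contract: nvmf_create_subsystem must reject any min_cntlid/max_cntlid pair falling outside [1, 65519], or with min greater than max, answering with JSON-RPC error -32602 and an "Invalid cntlid range [min-max]" message. A minimal standalone sketch of one such check, assuming an nvmf target is already running on the default RPC socket and that rpc.py from the SPDK tree is on PATH (the NQN below is arbitrary, and the exact capture behavior of rpc.py's error output is an assumption here):

#!/usr/bin/env bash
# Sketch: min_cntlid=0 is below the valid range [1, 65519], so the RPC must fail.
# Assumes a running nvmf target reachable over the default /var/tmp/spdk.sock.
set -u
nqn=nqn.2016-06.io.spdk:cnode-demo   # arbitrary test NQN
out=$(rpc.py nvmf_create_subsystem "$nqn" -i 0 2>&1)
if [[ $out == *"Invalid cntlid range"* ]]; then
    echo "PASS: out-of-range min_cntlid rejected"
else
    echo "FAIL: unexpected response: $out" >&2
    exit 1
fi

The other four cases in the trace (min of 65520, max of 0, max of 65520, and min 6 with max 5) follow the same pattern; only the -i/-I arguments change.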
00:12:14.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.404 --rc genhtml_branch_coverage=1 00:12:14.404 --rc genhtml_function_coverage=1 00:12:14.404 --rc genhtml_legend=1 00:12:14.404 --rc geninfo_all_blocks=1 00:12:14.404 --rc geninfo_unexecuted_blocks=1 00:12:14.404 00:12:14.404 ' 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.404 --rc genhtml_branch_coverage=1 00:12:14.404 --rc genhtml_function_coverage=1 00:12:14.404 --rc genhtml_legend=1 00:12:14.404 --rc geninfo_all_blocks=1 00:12:14.404 --rc geninfo_unexecuted_blocks=1 00:12:14.404 00:12:14.404 ' 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.404 --rc genhtml_branch_coverage=1 00:12:14.404 --rc genhtml_function_coverage=1 00:12:14.404 --rc genhtml_legend=1 00:12:14.404 --rc geninfo_all_blocks=1 00:12:14.404 --rc geninfo_unexecuted_blocks=1 00:12:14.404 00:12:14.404 ' 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.404 --rc genhtml_branch_coverage=1 00:12:14.404 --rc genhtml_function_coverage=1 00:12:14.404 --rc genhtml_legend=1 00:12:14.404 --rc geninfo_all_blocks=1 00:12:14.404 --rc geninfo_unexecuted_blocks=1 00:12:14.404 00:12:14.404 ' 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:14.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.404 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:14.664 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.241 14:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.241 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:21.242 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:21.242 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:21.242 Found net devices under 0000:86:00.0: cvl_0_0 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:21.242 Found net devices under 0000:86:00.1: cvl_0_1 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:21.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:12:21.242 00:12:21.242 --- 10.0.0.2 ping statistics --- 00:12:21.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.242 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:12:21.242 00:12:21.242 --- 10.0.0.1 ping statistics --- 00:12:21.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.242 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:21.242 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1406586 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1406586 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1406586 ']' 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:21.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.243 [2024-11-17 14:22:09.653417] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:12:21.243 [2024-11-17 14:22:09.653462] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.243 [2024-11-17 14:22:09.733335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:21.243 [2024-11-17 14:22:09.774785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.243 [2024-11-17 14:22:09.774822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.243 [2024-11-17 14:22:09.774829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.243 [2024-11-17 14:22:09.774836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.243 [2024-11-17 14:22:09.774841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.243 [2024-11-17 14:22:09.776285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.243 [2024-11-17 14:22:09.776392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.243 [2024-11-17 14:22:09.776392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.243 [2024-11-17 14:22:09.912185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
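Condensed from the nvmftestinit trace above: for NET_TYPE=phy the harness moves one NIC port (cvl_0_0 on this machine) into a private network namespace where the target will run, keeps its peer port (cvl_0_1) in the root namespace for the initiator, addresses them as 10.0.0.2/24 and 10.0.0.1/24, opens the NVMe/TCP listener port through iptables, and ping-checks both directions. Run as root, the wiring reduces to the following (interface names are specific to this run and will differ elsewhere):

# Target side lives in its own namespace; initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
ping -c 1 10.0.0.2                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator

With that in place, nvmf_tgt itself is launched inside the namespace (note the ip netns exec cvl_0_0_ns_spdk prefix on the nvmf_tgt command above), so the 10.0.0.2:4420 listener the test adds next is reachable from the initiator side through cvl_0_1.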
00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.243 [2024-11-17 14:22:09.932418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.243 NULL1 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1406626 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.243 14:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.243 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.503 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.503 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:21.503 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.503 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.503 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.070 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.070 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:22.070 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.070 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.070 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.329 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.329 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:22.329 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.329 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.329 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.588 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.588 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:22.588 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.588 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.588 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.848 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.848 14:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:22.848 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.848 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.848 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.107 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.107 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:23.107 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.107 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.107 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.675 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.675 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:23.675 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.675 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.675 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.934 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.934 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:23.934 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.934 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.934 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.194 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.194 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:24.194 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.194 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.194 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.453 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.453 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:24.453 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.453 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.453 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.022 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.022 14:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:25.022 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.022 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.022 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.281 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.281 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:25.281 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.281 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.281 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.540 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.540 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:25.540 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.540 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.540 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.800 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.800 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:25.800 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.800 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.800 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.059 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.059 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:26.059 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.059 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.059 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.627 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.627 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:26.627 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.627 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.627 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.886 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.886 14:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:26.886 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.886 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.886 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.145 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.145 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:27.145 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.145 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.145 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.404 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.404 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:27.404 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.404 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.404 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.663 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.663 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:27.663 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.663 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.663 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.231 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.232 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:28.232 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.232 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.232 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.491 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.491 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:28.491 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.491 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.491 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.750 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.750 14:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:28.750 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.750 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.750 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.009 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.009 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:29.009 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.009 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.009 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.577 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.577 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:29.577 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.577 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.577 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.836 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.836 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:29.836 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.836 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.836 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.103 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.103 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:30.104 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.104 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.104 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.366 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.366 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:30.366 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.367 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.367 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.625 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.625 14:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:30.625 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.625 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.625 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.193 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.193 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:31.193 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.193 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.193 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.193 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1406626 00:12:31.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1406626) - No such process 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1406626 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:31.452 rmmod nvme_tcp 00:12:31.452 rmmod nvme_fabrics 00:12:31.452 rmmod nvme_keyring 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1406586 ']' 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1406586 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1406586 ']' 00:12:31.452 14:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1406586 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1406586 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1406586' 00:12:31.452 killing process with pid 1406586 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1406586 00:12:31.452 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1406586 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.712 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.619 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.619 00:12:33.619 real 0m19.397s 00:12:33.619 user 0m40.597s 00:12:33.619 sys 0m8.517s 00:12:33.619 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.619 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.619 ************************************ 00:12:33.619 END TEST nvmf_connect_stress 00:12:33.619 ************************************ 00:12:33.879 14:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:33.879 14:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:33.879 
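Before the fused-ordering run below, a note on the loop traced above: connect_stress.sh alternates its line 34 (kill -0) and line 35 (rpc_cmd) until the stress process exits, then reaps it. A minimal sketch of that pattern, reconstructed only from the script line numbers visible in this trace; the exact RPC payload rpc_cmd sends is not shown in the log, and STRESS_PID stands in for the real variable name:
while kill -0 "$STRESS_PID"; do    # line 34: succeeds while the PID is alive; the final
                                   # "No such process" message above is the expected exit
    rpc_cmd                        # line 35: keep issuing RPCs to stress the target
done
wait "$STRESS_PID"                 # line 38: collect the exit status
rm -f "$testdir/rpc.txt"           # line 39: drop the RPC scratch file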
14:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.879 14:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.879 ************************************ 00:12:33.879 START TEST nvmf_fused_ordering 00:12:33.879 ************************************ 00:12:33.879 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:33.879 * Looking for test storage... 00:12:33.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.879 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:33.879 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:33.879 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:33.879 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:33.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.880 --rc genhtml_branch_coverage=1 00:12:33.880 --rc genhtml_function_coverage=1 00:12:33.880 --rc genhtml_legend=1 00:12:33.880 --rc geninfo_all_blocks=1 00:12:33.880 --rc geninfo_unexecuted_blocks=1 00:12:33.880 00:12:33.880 ' 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:33.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.880 --rc genhtml_branch_coverage=1 00:12:33.880 --rc genhtml_function_coverage=1 00:12:33.880 --rc genhtml_legend=1 00:12:33.880 --rc geninfo_all_blocks=1 00:12:33.880 --rc geninfo_unexecuted_blocks=1 00:12:33.880 00:12:33.880 ' 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:33.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.880 --rc genhtml_branch_coverage=1 00:12:33.880 --rc genhtml_function_coverage=1 00:12:33.880 --rc genhtml_legend=1 00:12:33.880 --rc geninfo_all_blocks=1 00:12:33.880 --rc geninfo_unexecuted_blocks=1 00:12:33.880 00:12:33.880 ' 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:33.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.880 --rc genhtml_branch_coverage=1 00:12:33.880 --rc genhtml_function_coverage=1 00:12:33.880 --rc genhtml_legend=1 00:12:33.880 --rc geninfo_all_blocks=1 00:12:33.880 --rc geninfo_unexecuted_blocks=1 00:12:33.880 00:12:33.880 ' 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
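The records above are scripts/common.sh deciding whether the installed lcov predates 2.x (lt 1.15 2 via cmp_versions). A condensed standalone sketch of that split-and-compare idiom, reconstructed only from this trace, so the real helper's details (for example its decimal validation of each component) may differ:
lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"    # split on . - : so 1.15 becomes (1 15)
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly less at this component
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1                                            # equal versions are not less-than
}
lt 1.15 2 && : # returns 0 here, matching the branch-coverage LCOV_OPTS exported above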
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:33.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.880 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.140 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.714 14:22:28 
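One record above deserves a flag: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and bash reports "[: : integer expression expected", since -eq requires integers on both sides. The script shrugs it off (the test simply fails and execution continues), but the guard is cheap. A minimal reproduction; FLAG is a hypothetical stand-in, as the variable actually tested at line 33 is not visible in this trace:
FLAG=""
[ "$FLAG" -eq 1 ] && echo enabled       # reproduces: [: : integer expression expected
[ "${FLAG:-0}" -eq 1 ] && echo enabled  # guarded: empty falls back to 0, test is valid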
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.714 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:40.715 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:40.715 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:40.715 Found net devices under 0000:86:00.0: cvl_0_0 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:40.715 Found net devices under 0000:86:00.1: cvl_0_1 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.715 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:40.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:12:40.715 00:12:40.715 --- 10.0.0.2 ping statistics --- 00:12:40.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.715 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:12:40.715 00:12:40.715 --- 10.0.0.1 ping statistics --- 00:12:40.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.715 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1411833 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1411833 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1411833 ']' 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:40.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.715 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.715 [2024-11-17 14:22:29.119127] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:12:40.715 [2024-11-17 14:22:29.119179] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.715 [2024-11-17 14:22:29.199612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.715 [2024-11-17 14:22:29.240613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.715 [2024-11-17 14:22:29.240655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.715 [2024-11-17 14:22:29.240662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.716 [2024-11-17 14:22:29.240668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.716 [2024-11-17 14:22:29.240673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.716 [2024-11-17 14:22:29.241239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.716 [2024-11-17 14:22:29.376808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.716 [2024-11-17 14:22:29.397002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.716 NULL1 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.716 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:40.716 [2024-11-17 14:22:29.456920] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
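For reference, the rpc_cmd calls traced above (fused_ordering.sh lines 15-20) boil down to this target setup: a TCP transport with the traced options, a subsystem capped at 10 namespaces, a TCP listener on 10.0.0.2:4420, and a null bdev attached as namespace 1. The same sequence via scripts/rpc.py, assuming rpc_cmd simply forwards its arguments there (the wrapper itself is not shown in this trace):
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks ("size: 1GB" below)
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1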
00:12:40.716 [2024-11-17 14:22:29.456964] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412024 ] 00:12:40.716 Attached to nqn.2016-06.io.spdk:cnode1 00:12:40.716 Namespace ID: 1 size: 1GB
00:12:40.716 fused_ordering(0) ... 00:12:42.066 fused_ordering(1023) [iterations 0 through 1023 all reported in order between 00:12:40.716 and 00:12:42.066; 1024 identical per-iteration lines condensed]
00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.066 rmmod nvme_tcp 00:12:42.066 rmmod nvme_fabrics 00:12:42.066 rmmod nvme_keyring 00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:42.066 14:22:31
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1411833 ']' 00:12:42.066 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1411833 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1411833 ']' 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1411833 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1411833 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1411833' 00:12:42.325 killing process with pid 1411833 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1411833 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1411833 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.325 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.326 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.326 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.326 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.326 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.326 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:44.865 00:12:44.865 real 0m10.672s 00:12:44.865 user 0m4.929s 00:12:44.865 sys 0m5.887s 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.865 ************************************ 00:12:44.865 END TEST nvmf_fused_ordering 00:12:44.865 
************************************ 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:44.865 ************************************ 00:12:44.865 START TEST nvmf_ns_masking 00:12:44.865 ************************************ 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:44.865 * Looking for test storage... 00:12:44.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.865 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:44.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.866 --rc genhtml_branch_coverage=1 00:12:44.866 --rc genhtml_function_coverage=1 00:12:44.866 --rc genhtml_legend=1 00:12:44.866 --rc geninfo_all_blocks=1 00:12:44.866 --rc geninfo_unexecuted_blocks=1 00:12:44.866 00:12:44.866 ' 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:44.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.866 --rc genhtml_branch_coverage=1 00:12:44.866 --rc genhtml_function_coverage=1 00:12:44.866 --rc genhtml_legend=1 00:12:44.866 --rc geninfo_all_blocks=1 00:12:44.866 --rc geninfo_unexecuted_blocks=1 00:12:44.866 00:12:44.866 ' 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:44.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.866 --rc genhtml_branch_coverage=1 00:12:44.866 --rc genhtml_function_coverage=1 00:12:44.866 --rc genhtml_legend=1 00:12:44.866 --rc geninfo_all_blocks=1 00:12:44.866 --rc geninfo_unexecuted_blocks=1 00:12:44.866 00:12:44.866 ' 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:44.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.866 --rc genhtml_branch_coverage=1 00:12:44.866 --rc genhtml_function_coverage=1 00:12:44.866 --rc genhtml_legend=1 00:12:44.866 --rc geninfo_all_blocks=1 00:12:44.866 --rc geninfo_unexecuted_blocks=1 00:12:44.866 00:12:44.866 ' 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8047c647-ecf3-4f58-9fc3-8807dd0de2ba 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9d98e5fa-1361-4f92-be13-ea41fb3adc70 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e2b3a6c1-37be-417c-8805-5f189bf58cee 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.866 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.439 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.439 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.439 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.439 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.439 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.439 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.439 14:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.439 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.439 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.439 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:51.439 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:51.440 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:51.440 14:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:51.440 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:51.440 Found net devices under 0000:86:00.0: cvl_0_0 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
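[Editor's note: the trace above shows nvmf/common.sh resolving each detected E810 port (0000:86:00.0, 0000:86:00.1) to its kernel net device by globbing sysfs. A minimal standalone sketch of that sysfs lookup follows; the PCI addresses are taken from this run, and the loop body is an illustration of the technique, not the verbatim nvmf/common.sh code.]

# Sketch: map a PCI address to the net device behind it, as the trace above does.
for pci in 0000:86:00.0 0000:86:00.1; do
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue            # skip PCI functions with no net device
    dev=${netdir##*/}                       # strip the sysfs path, keeping e.g. cvl_0_0
    state=$(cat "$netdir/operstate" 2>/dev/null)
    echo "Found net devices under $pci: $dev ($state)"
  done
done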
00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:51.440 Found net devices under 0000:86:00.1: cvl_0_1 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.440 14:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:51.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:12:51.440 00:12:51.440 --- 10.0.0.2 ping statistics --- 00:12:51.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.440 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:12:51.440 00:12:51.440 --- 10.0.0.1 ping statistics --- 00:12:51.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.440 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:51.440 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1415802 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1415802 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1415802 ']' 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.441 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.441 [2024-11-17 14:22:39.877942] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:12:51.441 [2024-11-17 14:22:39.877987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.441 [2024-11-17 14:22:39.958097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.441 [2024-11-17 14:22:39.996897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.441 [2024-11-17 14:22:39.996931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.441 [2024-11-17 14:22:39.996938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.441 [2024-11-17 14:22:39.996943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.441 [2024-11-17 14:22:39.996948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
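At this point nvmfappstart has launched the target inside the namespace and waitforlisten is polling its RPC socket; the EAL and app_setup_trace notices above are normal startup chatter, with tracepoint group mask 0xFFFF enabled via -e. A condensed sketch of the start-and-wait pattern, assuming the SPDK build-tree layout used in this job:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # block until the app answers on /var/tmp/spdk.sock before sending RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done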
00:12:51.441 [2024-11-17 14:22:39.997518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.441 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.441 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:51.441 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.441 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.441 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.441 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.441 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:51.441 [2024-11-17 14:22:40.308219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.441 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:51.441 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:51.441 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:51.441 Malloc1 00:12:51.441 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:51.700 Malloc2 00:12:51.700 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:51.959 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:52.217 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.217 [2024-11-17 14:22:41.348539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.217 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:52.217 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2b3a6c1-37be-417c-8805-5f189bf58cee -a 10.0.0.2 -s 4420 -i 4 00:12:52.476 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.476 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:52.476 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.476 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:52.476 
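ns_masking.sh@53-67 above performs the baseline provisioning: a TCP transport, two 64 MB malloc bdevs with 512-byte blocks, subsystem cnode1 with Malloc1 attached as namespace 1 (auto-visible), a listener on the target IP, and a kernel-initiator connection that presents an explicit host NQN plus host UUID (-I), the identity the masking RPCs key on. The same sequence as plain commands (rpc.py path shortened for readability):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 -I e2b3a6c1-37be-417c-8805-5f189bf58cee -i 4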
14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:54.383 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:54.383 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:54.383 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.642 [ 0]:0x1 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f800c16863a4aaeabe8e5435a2a9bcd 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f800c16863a4aaeabe8e5435a2a9bcd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.642 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:54.902 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:54.902 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.902 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.902 [ 0]:0x1 00:12:54.902 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.902 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.902 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f800c16863a4aaeabe8e5435a2a9bcd 00:12:54.902 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f800c16863a4aaeabe8e5435a2a9bcd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.902 14:22:43 
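ns_is_visible (ns_masking.sh@43-45) is the assertion used throughout: the NSID should appear in nvme list-ns, and nvme id-ns must return a non-zero NGUID, since a namespace that is attached but masked away from this host identifies with an all-zeros NGUID. Approximately:

    ns_is_visible() {        # usage: ns_is_visible 0x1
        nvme list-ns /dev/nvme0 | grep "$1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]
    }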
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:54.902 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.902 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:54.902 [ 1]:0x2 00:12:54.902 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.902 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:54.902 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ad1176b2cef4f7ca04e6eab7721f9ae 00:12:54.902 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ad1176b2cef4f7ca04e6eab7721f9ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.902 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:54.902 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.902 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.161 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:55.420 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:55.420 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2b3a6c1-37be-417c-8805-5f189bf58cee -a 10.0.0.2 -s 4420 -i 4 00:12:55.420 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:55.420 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:55.420 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.420 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:55.420 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:55.420 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:57.954 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:57.954 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:57.954 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.954 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:57.954 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.954 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:12:57.954 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:57.954 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:57.954 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:57.955 [ 0]:0x2 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=6ad1176b2cef4f7ca04e6eab7721f9ae 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ad1176b2cef4f7ca04e6eab7721f9ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.955 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:57.955 [ 0]:0x1 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f800c16863a4aaeabe8e5435a2a9bcd 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f800c16863a4aaeabe8e5435a2a9bcd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:57.955 [ 1]:0x2 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ad1176b2cef4f7ca04e6eab7721f9ae 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ad1176b2cef4f7ca04e6eab7721f9ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.955 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.214 14:22:47 
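This is the core of the masking test: namespace 1 was re-attached with --no-auto-visible (ns_masking.sh@80), so it stays hidden until a host is explicitly granted access, and the grant/revoke pair is observed on the already-open connection, with no reconnect in between:

    # grant host1 access to nsid 1, then take it away again
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The NOT wrapper from autotest_common.sh that follows simply inverts an exit status: the step passes only if ns_is_visible 0x1 now fails.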
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.214 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.473 [ 0]:0x2 00:12:58.473 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.473 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.473 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ad1176b2cef4f7ca04e6eab7721f9ae 00:12:58.473 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ad1176b2cef4f7ca04e6eab7721f9ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.473 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:58.473 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.473 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:58.732 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:58.732 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2b3a6c1-37be-417c-8805-5f189bf58cee -a 10.0.0.2 -s 4420 -i 4 00:12:58.732 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:58.732 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:58.732 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.732 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:58.732 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:58.732 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:01.269 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:01.269 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:01.269 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.269 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:01.269 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.269 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:01.269 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:01.269 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:01.269 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:01.269 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:01.269 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.270 [ 0]:0x1 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f800c16863a4aaeabe8e5435a2a9bcd 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f800c16863a4aaeabe8e5435a2a9bcd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.270 [ 1]:0x2 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ad1176b2cef4f7ca04e6eab7721f9ae 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ad1176b2cef4f7ca04e6eab7721f9ae != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.270 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.530 [ 0]:0x2 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ad1176b2cef4f7ca04e6eab7721f9ae 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ad1176b2cef4f7ca04e6eab7721f9ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.530 14:22:50 
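Revoking host1's access again (ns_masking.sh@106) only affects the masked namespace: nsid 1 vanishes from this controller while the auto-visible nsid 2 is untouched. Spot-checking that state by hand would look like:

    nvme list-ns /dev/nvme0                               # now lists only nsid 2
    nvme id-ns /dev/nvme0 -n 1 -o json | jq -r .nguid     # all zeros: masked out
    nvme id-ns /dev/nvme0 -n 2 -o json | jq -r .nguid     # real NGUID: still visible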
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:01.530 [2024-11-17 14:22:50.718952] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:01.530 request: 00:13:01.530 { 00:13:01.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.530 "nsid": 2, 00:13:01.530 "host": "nqn.2016-06.io.spdk:host1", 00:13:01.530 "method": "nvmf_ns_remove_host", 00:13:01.530 "req_id": 1 00:13:01.530 } 00:13:01.530 Got JSON-RPC error response 00:13:01.530 response: 00:13:01.530 { 00:13:01.530 "code": -32602, 00:13:01.530 "message": "Invalid parameters" 00:13:01.530 } 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:01.530 14:22:50 
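The NOT-wrapped RPC above (ns_masking.sh@111) is a negative test: nvmf_ns_remove_host against nsid 2 is rejected with -32602 "Invalid parameters" since namespace 2 was attached auto-visible and so has no per-host visibility list to edit; the nvmf_rpc_ns_visible_paused *ERROR* line is the expected target-side trace of that rejection, not a test failure. In isolation:

    # expected to fail: nsid 2 was added without --no-auto-visible
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
        || echo 'rejected as expected'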
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.530 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.790 [ 0]:0x2 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ad1176b2cef4f7ca04e6eab7721f9ae 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ad1176b2cef4f7ca04e6eab7721f9ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1417794 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1417794 
/var/tmp/host.sock 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1417794 ']' 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:01.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.790 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:01.790 [2024-11-17 14:22:50.941794] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:01.790 [2024-11-17 14:22:50.941840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417794 ] 00:13:02.049 [2024-11-17 14:22:51.017085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.049 [2024-11-17 14:22:51.057674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.308 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.308 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:02.308 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.308 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:02.567 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8047c647-ecf3-4f58-9fc3-8807dd0de2ba 00:13:02.567 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:02.567 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8047C647ECF34F589FC38807DD0DE2BA -i 00:13:02.826 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9d98e5fa-1361-4f92-be13-ea41fb3adc70 00:13:02.826 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:02.826 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9D98E5FA13614F92BE13EA41FB3ADC70 -i 00:13:03.085 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:03.344 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:03.344 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:03.344 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:03.602 nvme0n1 00:13:03.861 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:03.861 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:04.120 nvme1n2 00:13:04.120 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:04.120 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:04.120 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:04.120 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:04.120 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:04.379 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:04.379 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:04.379 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:04.379 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:04.637 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8047c647-ecf3-4f58-9fc3-8807dd0de2ba == \8\0\4\7\c\6\4\7\-\e\c\f\3\-\4\f\5\8\-\9\f\c\3\-\8\8\0\7\d\d\0\d\e\2\b\a ]] 00:13:04.637 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:04.637 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:04.637 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:04.896 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
9d98e5fa-1361-4f92-be13-ea41fb3adc70 == \9\d\9\8\e\5\f\a\-\1\3\6\1\-\4\f\9\2\-\b\e\1\3\-\e\a\4\1\f\b\3\a\d\c\7\0 ]] 00:13:04.896 14:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.896 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 8047c647-ecf3-4f58-9fc3-8807dd0de2ba 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8047C647ECF34F589FC38807DD0DE2BA 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8047C647ECF34F589FC38807DD0DE2BA 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:05.156 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8047C647ECF34F589FC38807DD0DE2BA 00:13:05.415 [2024-11-17 14:22:54.465396] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:05.415 [2024-11-17 14:22:54.465428] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:05.415 [2024-11-17 14:22:54.465436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.415 request: 00:13:05.415 { 00:13:05.415 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:05.415 "namespace": { 00:13:05.415 "bdev_name": 
"invalid", 00:13:05.415 "nsid": 1, 00:13:05.415 "nguid": "8047C647ECF34F589FC38807DD0DE2BA", 00:13:05.415 "no_auto_visible": false 00:13:05.415 }, 00:13:05.415 "method": "nvmf_subsystem_add_ns", 00:13:05.415 "req_id": 1 00:13:05.415 } 00:13:05.415 Got JSON-RPC error response 00:13:05.415 response: 00:13:05.415 { 00:13:05.415 "code": -32602, 00:13:05.415 "message": "Invalid parameters" 00:13:05.415 } 00:13:05.415 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:05.415 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.415 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.415 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.415 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 8047c647-ecf3-4f58-9fc3-8807dd0de2ba 00:13:05.415 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:05.415 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8047C647ECF34F589FC38807DD0DE2BA -i 00:13:05.674 14:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:07.578 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:07.578 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:07.578 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1417794 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1417794 ']' 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1417794 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1417794 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1417794' 00:13:07.837 killing process with pid 1417794 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1417794 00:13:07.837 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1417794 00:13:08.106 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.373 rmmod nvme_tcp 00:13:08.373 rmmod nvme_fabrics 00:13:08.373 rmmod nvme_keyring 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1415802 ']' 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1415802 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1415802 ']' 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1415802 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1415802 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1415802' 00:13:08.373 killing process with pid 1415802 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1415802 00:13:08.373 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1415802 00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
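nvmftestfini then unwinds everything the prologue set up: the trap kills the target (pid 1415802), the nvme-tcp/fabrics/keyring modules are unloaded (the bare rmmod lines are modprobe -v -r's verbose output), and iptr restores the firewall by filtering out only the comment-tagged SPDK rules:

    # drop just the rules tagged SPDK_NVMF, leave the rest of the ruleset alone
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # remove_spdk_ns tears down the test namespace; roughly:
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1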
00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.633 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.170 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:11.170 00:13:11.170 real 0m26.214s 00:13:11.170 user 0m31.500s 00:13:11.170 sys 0m7.162s 00:13:11.170 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.170 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:11.170 ************************************ 00:13:11.170 END TEST nvmf_ns_masking 00:13:11.170 ************************************ 00:13:11.170 14:22:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:11.170 14:22:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:11.170 14:22:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:11.170 14:22:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.170 14:22:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.170 ************************************ 00:13:11.170 START TEST nvmf_nvme_cli 00:13:11.170 ************************************ 00:13:11.170 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:11.170 * Looking for test storage... 
00:13:11.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.170 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.171 --rc genhtml_branch_coverage=1 00:13:11.171 --rc genhtml_function_coverage=1 00:13:11.171 --rc genhtml_legend=1 00:13:11.171 --rc geninfo_all_blocks=1 00:13:11.171 --rc geninfo_unexecuted_blocks=1 00:13:11.171 00:13:11.171 ' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.171 --rc genhtml_branch_coverage=1 00:13:11.171 --rc genhtml_function_coverage=1 00:13:11.171 --rc genhtml_legend=1 00:13:11.171 --rc geninfo_all_blocks=1 00:13:11.171 --rc geninfo_unexecuted_blocks=1 00:13:11.171 00:13:11.171 ' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.171 --rc genhtml_branch_coverage=1 00:13:11.171 --rc genhtml_function_coverage=1 00:13:11.171 --rc genhtml_legend=1 00:13:11.171 --rc geninfo_all_blocks=1 00:13:11.171 --rc geninfo_unexecuted_blocks=1 00:13:11.171 00:13:11.171 ' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.171 --rc genhtml_branch_coverage=1 00:13:11.171 --rc genhtml_function_coverage=1 00:13:11.171 --rc genhtml_legend=1 00:13:11.171 --rc geninfo_all_blocks=1 00:13:11.171 --rc geninfo_unexecuted_blocks=1 00:13:11.171 00:13:11.171 ' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
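The lt 1.15 2 call traced above is the harness's pure-shell version comparison: cmp_versions splits each version string into numeric fields on '.', '-' and ':' (the IFS=.-: reads), treats missing fields as 0, and compares field by field from the left. A minimal standalone sketch of the same idea, assuming bash (the function name is illustrative, not the literal scripts/common.sh code):

# True (exit 0) when version $1 sorts strictly before version $2.
version_lt() {
    local IFS=.-:                         # split fields the way the trace shows
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # absent fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                              # equal is not less-than
}
version_lt 1.15 2 && echo 'lcov predates 2.x'   # fires, as in the trace

The first fields already differ (1 < 2), so lcov 1.15 is taken as pre-2.x and the run selects the old-style --rc lcov_branch_coverage/lcov_function_coverage options seen above. The uname -s probe from nvmf/common.sh picks up again just below.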
00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:11.171 14:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:11.171 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:17.745 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:17.746 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:17.746 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.746 
14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:17.746 Found net devices under 0000:86:00.0: cvl_0_0 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:17.746 Found net devices under 0000:86:00.1: cvl_0_1 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:17.746 14:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:17.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:13:17.746 00:13:17.746 --- 10.0.0.2 ping statistics --- 00:13:17.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.746 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:13:17.746 00:13:17.746 --- 10.0.0.1 ping statistics --- 00:13:17.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.746 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1422508 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1422508 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1422508 ']' 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.746 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.746 [2024-11-17 14:23:06.162073] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
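With both pings answered and nvmf_tgt just launched (its DPDK EAL parameter dump continues below), the phy-mode plumbing is complete. The pattern is worth spelling out: the target-side e810 port cvl_0_0 is moved into a private network namespace, the initiator-side port cvl_0_1 stays in the root namespace, and the SPDK target runs inside the namespace, so the kernel initiator and the SPDK target exchange real traffic over the link. Condensed from the trace (device names and addresses as in this run; the nvmf_tgt path is shortened, and the final socket poll is a stand-in for the harness's waitforlisten helper):

NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment tag is what the
# iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup keys on.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                       # root ns -> target
ip netns exec $NS ping -c 1 10.0.0.1                     # target ns -> initiator
modprobe nvme-tcp                                        # kernel initiator for the host side
ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # crude waitforlisten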
00:13:17.746 [2024-11-17 14:23:06.162118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.746 [2024-11-17 14:23:06.238502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.746 [2024-11-17 14:23:06.282765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.746 [2024-11-17 14:23:06.282803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.746 [2024-11-17 14:23:06.282810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.746 [2024-11-17 14:23:06.282817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.746 [2024-11-17 14:23:06.282822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.746 [2024-11-17 14:23:06.284280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.746 [2024-11-17 14:23:06.284394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.746 [2024-11-17 14:23:06.284439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.747 [2024-11-17 14:23:06.284439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.747 [2024-11-17 14:23:06.420704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.747 Malloc0 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.747 Malloc1 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.747 [2024-11-17 14:23:06.513688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:17.747 00:13:17.747 Discovery Log Number of Records 2, Generation counter 2 00:13:17.747 =====Discovery Log Entry 0====== 00:13:17.747 trtype: tcp 00:13:17.747 adrfam: ipv4 00:13:17.747 subtype: current discovery subsystem 00:13:17.747 treq: not required 00:13:17.747 portid: 0 00:13:17.747 trsvcid: 4420 00:13:17.747 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:17.747 traddr: 10.0.0.2 00:13:17.747 eflags: explicit discovery connections, duplicate discovery information 00:13:17.747 sectype: none 00:13:17.747 =====Discovery Log Entry 1====== 00:13:17.747 trtype: tcp 00:13:17.747 adrfam: ipv4 00:13:17.747 subtype: nvme subsystem 00:13:17.747 treq: not required 00:13:17.747 portid: 0 00:13:17.747 trsvcid: 4420 00:13:17.747 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:17.747 traddr: 10.0.0.2 00:13:17.747 eflags: none 00:13:17.747 sectype: none 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:17.747 14:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.124 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:19.124 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:19.124 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.124 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:19.124 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:19.124 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:21.044 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:21.044 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:21.044 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.044 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:21.044 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.044 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:21.044 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:21.044 14:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:21.044 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:21.044 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:21.044 /dev/nvme0n2 ]] 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:21.044 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.303 14:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.303 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:21.562 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:21.562 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.562 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:21.562 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.562 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:21.563 rmmod nvme_tcp 00:13:21.563 rmmod nvme_fabrics 00:13:21.563 rmmod nvme_keyring 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1422508 ']' 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1422508 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1422508 ']' 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1422508 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1422508 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1422508' 00:13:21.563 killing process with pid 1422508 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1422508 00:13:21.563 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1422508 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.822 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.360 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:24.360 00:13:24.360 real 0m13.038s 00:13:24.360 user 0m20.114s 00:13:24.360 sys 0m5.101s 00:13:24.360 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.360 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.360 ************************************ 00:13:24.360 END TEST nvmf_nvme_cli 00:13:24.360 ************************************ 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.360 ************************************ 00:13:24.360 START TEST nvmf_vfio_user 00:13:24.360 ************************************ 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:13:24.360 * Looking for test storage... 00:13:24.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:24.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.360 --rc genhtml_branch_coverage=1 00:13:24.360 --rc genhtml_function_coverage=1 00:13:24.360 --rc genhtml_legend=1 00:13:24.360 --rc geninfo_all_blocks=1 00:13:24.360 --rc geninfo_unexecuted_blocks=1 00:13:24.360 00:13:24.360 ' 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:24.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.360 --rc genhtml_branch_coverage=1 00:13:24.360 --rc genhtml_function_coverage=1 00:13:24.360 --rc genhtml_legend=1 00:13:24.360 --rc geninfo_all_blocks=1 00:13:24.360 --rc geninfo_unexecuted_blocks=1 00:13:24.360 00:13:24.360 ' 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:24.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.360 --rc genhtml_branch_coverage=1 00:13:24.360 --rc genhtml_function_coverage=1 00:13:24.360 --rc genhtml_legend=1 00:13:24.360 --rc geninfo_all_blocks=1 00:13:24.360 --rc geninfo_unexecuted_blocks=1 00:13:24.360 00:13:24.360 ' 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:24.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.360 --rc genhtml_branch_coverage=1 00:13:24.360 --rc genhtml_function_coverage=1 00:13:24.360 --rc genhtml_legend=1 00:13:24.360 --rc geninfo_all_blocks=1 00:13:24.360 --rc geninfo_unexecuted_blocks=1 00:13:24.360 00:13:24.360 ' 00:13:24.360 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:24.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
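The harness error captured above — common.sh line 33 complaining "[: : integer expression expected" — is ordinary bash behavior: the traced test runs '[' '' -eq 1 ']', and the '[' builtin requires both operands of -eq to be integers, so an unset or empty variable makes the test print that message and evaluate false while the script keeps going. A minimal sketch of the failure mode and a defensive rewrite; SOME_TEST_FLAG is a hypothetical stand-in, since the trace does not show which variable expanded empty:

#!/usr/bin/env bash
SOME_TEST_FLAG=""                        # hypothetical: whatever flag was unset in this run

# Reproduces the message: '' is not an integer, so the test errors and returns false.
if [ "$SOME_TEST_FLAG" -eq 1 ]; then
    echo "flag enabled"
fi

# Defensive variant: default the empty expansion to 0 so -eq always sees an integer.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi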
00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1423805 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1423805' 00:13:24.361 Process pid: 1423805 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1423805 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1423805 ']' 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:24.361 [2024-11-17 14:23:13.306279] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:24.361 [2024-11-17 14:23:13.306327] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.361 [2024-11-17 14:23:13.380710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.361 [2024-11-17 14:23:13.420350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.361 [2024-11-17 14:23:13.420393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
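Condensed, the launch sequence traced above (@54 through @62) starts nvmf_tgt with shared-memory id 0, the full tracepoint mask, and cores 0-3, then blocks until the application answers on the default RPC socket. A sketch assuming the checkout path used throughout this run; the polling loop stands in for the harness's waitforlisten helper rather than reproducing it:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Launch the target: shm id 0, all tracepoint groups (0xFFFF), cores 0-3.
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT

# Block until the app listens on /var/tmp/spdk.sock, as waitforlisten does.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
echo "Process pid: $nvmfpid"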
00:13:24.361 [2024-11-17 14:23:13.420400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.361 [2024-11-17 14:23:13.420406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.361 [2024-11-17 14:23:13.420410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.361 [2024-11-17 14:23:13.422043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.361 [2024-11-17 14:23:13.422172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.361 [2024-11-17 14:23:13.422280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.361 [2024-11-17 14:23:13.422281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:24.361 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:25.746 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:25.746 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:25.746 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:25.746 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:25.746 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:25.746 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:25.746 Malloc1 00:13:25.746 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:26.005 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:26.264 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:26.523 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.523 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:26.523 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:26.781 Malloc2 00:13:26.781 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
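The provisioning RPCs traced here (@64 through @74, with the cnode2 namespace and listener steps continuing just below) follow the standard vfio-user recipe: one VFIOUSER transport, then per device a socket directory, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener bound to the directory. Gathered into a loop for reference — every RPC name and argument is taken verbatim from the trace; only the loop wrapper is added:

#!/usr/bin/env bash
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc_py" nvmf_create_transport -t VFIOUSER

for i in 1 2; do
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    "$rpc_py" bdev_malloc_create 64 512 -b "Malloc$i"      # 64 MiB bdev, 512 B blocks
    "$rpc_py" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    "$rpc_py" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    "$rpc_py" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done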
00:13:26.781 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:27.040 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:27.300 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:27.300 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:27.300 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:27.300 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:27.300 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:27.300 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:27.300 [2024-11-17 14:23:16.401771] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:27.300 [2024-11-17 14:23:16.401802] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1424290 ] 00:13:27.300 [2024-11-17 14:23:16.445472] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:27.300 [2024-11-17 14:23:16.456361] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:27.300 [2024-11-17 14:23:16.456383] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ffb6ae30000 00:13:27.300 [2024-11-17 14:23:16.456639] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.300 [2024-11-17 14:23:16.457641] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.300 [2024-11-17 14:23:16.458644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.300 [2024-11-17 14:23:16.459641] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:27.300 [2024-11-17 14:23:16.460654] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:27.300 [2024-11-17 14:23:16.461664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.300 [2024-11-17 14:23:16.462659] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:27.300 [2024-11-17 14:23:16.463667] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:27.300 [2024-11-17 14:23:16.464673] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:27.300 [2024-11-17 14:23:16.464683] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ffb6ae25000 00:13:27.300 [2024-11-17 14:23:16.465626] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:27.300 [2024-11-17 14:23:16.478237] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:27.300 [2024-11-17 14:23:16.478261] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:27.300 [2024-11-17 14:23:16.480766] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:27.300 [2024-11-17 14:23:16.480804] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:27.300 [2024-11-17 14:23:16.480876] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:27.301 [2024-11-17 14:23:16.480891] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:27.301 [2024-11-17 14:23:16.480896] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:27.301 [2024-11-17 14:23:16.481770] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:27.301 [2024-11-17 14:23:16.481779] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:27.301 [2024-11-17 14:23:16.481786] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:27.301 [2024-11-17 14:23:16.482769] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:27.301 [2024-11-17 14:23:16.482777] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:27.301 [2024-11-17 14:23:16.482784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:27.301 [2024-11-17 14:23:16.483774] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:27.301 [2024-11-17 14:23:16.483782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:27.301 [2024-11-17 14:23:16.484778] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:13:27.301 [2024-11-17 14:23:16.484787] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:27.301 [2024-11-17 14:23:16.484792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:27.301 [2024-11-17 14:23:16.484798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:27.301 [2024-11-17 14:23:16.484905] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:27.301 [2024-11-17 14:23:16.484912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:27.301 [2024-11-17 14:23:16.484917] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:27.301 [2024-11-17 14:23:16.485788] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:27.301 [2024-11-17 14:23:16.486786] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:27.301 [2024-11-17 14:23:16.487791] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:27.301 [2024-11-17 14:23:16.488791] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:27.301 [2024-11-17 14:23:16.488863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:27.301 [2024-11-17 14:23:16.489805] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:27.301 [2024-11-17 14:23:16.489813] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:27.301 [2024-11-17 14:23:16.489817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.489834] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:27.301 [2024-11-17 14:23:16.489844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.489859] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.301 [2024-11-17 14:23:16.489864] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.301 [2024-11-17 14:23:16.489868] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.301 [2024-11-17 14:23:16.489881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:13:27.301 [2024-11-17 14:23:16.489927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:27.301 [2024-11-17 14:23:16.489935] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:27.301 [2024-11-17 14:23:16.489940] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:27.301 [2024-11-17 14:23:16.489945] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:27.301 [2024-11-17 14:23:16.489949] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:27.301 [2024-11-17 14:23:16.489955] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:27.301 [2024-11-17 14:23:16.489959] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:27.301 [2024-11-17 14:23:16.489964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.489972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.489984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:27.301 [2024-11-17 14:23:16.489995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:27.301 [2024-11-17 14:23:16.490006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.301 [2024-11-17 14:23:16.490014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.301 [2024-11-17 14:23:16.490021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.301 [2024-11-17 14:23:16.490028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.301 [2024-11-17 14:23:16.490033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.490039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.490047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:27.301 [2024-11-17 14:23:16.490057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:27.301 [2024-11-17 14:23:16.490063] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:27.301 
[2024-11-17 14:23:16.490068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.490074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.490079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.490087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:27.301 [2024-11-17 14:23:16.490099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:27.301 [2024-11-17 14:23:16.490151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.490158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.490165] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:27.301 [2024-11-17 14:23:16.490169] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:27.301 [2024-11-17 14:23:16.490172] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.301 [2024-11-17 14:23:16.490178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:27.301 [2024-11-17 14:23:16.490194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:27.301 [2024-11-17 14:23:16.490203] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:27.301 [2024-11-17 14:23:16.490211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.490219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.490225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.301 [2024-11-17 14:23:16.490229] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.301 [2024-11-17 14:23:16.490232] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.301 [2024-11-17 14:23:16.490237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.301 [2024-11-17 14:23:16.490254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:27.301 [2024-11-17 14:23:16.490265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.490272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:27.301 [2024-11-17 14:23:16.490278] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.301 [2024-11-17 14:23:16.490282] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.301 [2024-11-17 14:23:16.490285] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.301 [2024-11-17 14:23:16.490290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.302 [2024-11-17 14:23:16.490307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:27.302 [2024-11-17 14:23:16.490314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:27.302 [2024-11-17 14:23:16.490320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:27.302 [2024-11-17 14:23:16.490327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:27.302 [2024-11-17 14:23:16.490332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:27.302 [2024-11-17 14:23:16.490336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:27.302 [2024-11-17 14:23:16.490341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:27.302 [2024-11-17 14:23:16.490345] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:27.302 [2024-11-17 14:23:16.490349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:27.302 [2024-11-17 14:23:16.490358] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:27.302 [2024-11-17 14:23:16.490375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:27.302 [2024-11-17 14:23:16.490386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:27.302 [2024-11-17 14:23:16.490397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:27.302 [2024-11-17 14:23:16.490411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:27.302 [2024-11-17 14:23:16.490422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:27.302 [2024-11-17 14:23:16.490433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:27.302 [2024-11-17 14:23:16.490442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:27.302 [2024-11-17 14:23:16.490451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:27.302 [2024-11-17 14:23:16.490462] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:27.302 [2024-11-17 14:23:16.490466] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:27.302 [2024-11-17 14:23:16.490469] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:27.302 [2024-11-17 14:23:16.490472] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:27.302 [2024-11-17 14:23:16.490475] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:27.302 [2024-11-17 14:23:16.490481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:27.302 [2024-11-17 14:23:16.490488] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:27.302 [2024-11-17 14:23:16.490491] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:27.302 [2024-11-17 14:23:16.490494] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.302 [2024-11-17 14:23:16.490500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:27.302 [2024-11-17 14:23:16.490506] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:27.302 [2024-11-17 14:23:16.490510] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.302 [2024-11-17 14:23:16.490513] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.302 [2024-11-17 14:23:16.490518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.302 [2024-11-17 14:23:16.490525] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:27.302 [2024-11-17 14:23:16.490529] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:27.302 [2024-11-17 14:23:16.490532] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.302 [2024-11-17 14:23:16.490537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:27.302 [2024-11-17 14:23:16.490543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:27.302 [2024-11-17 14:23:16.490555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0
00:13:27.302 [2024-11-17 14:23:16.490564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:13:27.302 [2024-11-17 14:23:16.490570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:13:27.302 =====================================================
00:13:27.302 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:27.302 =====================================================
00:13:27.302 Controller Capabilities/Features
00:13:27.302 ================================
00:13:27.302 Vendor ID: 4e58
00:13:27.302 Subsystem Vendor ID: 4e58
00:13:27.302 Serial Number: SPDK1
00:13:27.302 Model Number: SPDK bdev Controller
00:13:27.302 Firmware Version: 25.01
00:13:27.302 Recommended Arb Burst: 6
00:13:27.302 IEEE OUI Identifier: 8d 6b 50
00:13:27.302 Multi-path I/O
00:13:27.302 May have multiple subsystem ports: Yes
00:13:27.302 May have multiple controllers: Yes
00:13:27.302 Associated with SR-IOV VF: No
00:13:27.302 Max Data Transfer Size: 131072
00:13:27.302 Max Number of Namespaces: 32
00:13:27.302 Max Number of I/O Queues: 127
00:13:27.302 NVMe Specification Version (VS): 1.3
00:13:27.302 NVMe Specification Version (Identify): 1.3
00:13:27.302 Maximum Queue Entries: 256
00:13:27.302 Contiguous Queues Required: Yes
00:13:27.302 Arbitration Mechanisms Supported
00:13:27.302 Weighted Round Robin: Not Supported
00:13:27.302 Vendor Specific: Not Supported
00:13:27.302 Reset Timeout: 15000 ms
00:13:27.302 Doorbell Stride: 4 bytes
00:13:27.302 NVM Subsystem Reset: Not Supported
00:13:27.302 Command Sets Supported
00:13:27.302 NVM Command Set: Supported
00:13:27.302 Boot Partition: Not Supported
00:13:27.302 Memory Page Size Minimum: 4096 bytes
00:13:27.302 Memory Page Size Maximum: 4096 bytes
00:13:27.302 Persistent Memory Region: Not Supported
00:13:27.302 Optional Asynchronous Events Supported
00:13:27.302 Namespace Attribute Notices: Supported
00:13:27.302 Firmware Activation Notices: Not Supported
00:13:27.302 ANA Change Notices: Not Supported
00:13:27.302 PLE Aggregate Log Change Notices: Not Supported
00:13:27.302 LBA Status Info Alert Notices: Not Supported
00:13:27.302 EGE Aggregate Log Change Notices: Not Supported
00:13:27.302 Normal NVM Subsystem Shutdown event: Not Supported
00:13:27.302 Zone Descriptor Change Notices: Not Supported
00:13:27.302 Discovery Log Change Notices: Not Supported
00:13:27.302 Controller Attributes
00:13:27.302 128-bit Host Identifier: Supported
00:13:27.302 Non-Operational Permissive Mode: Not Supported
00:13:27.302 NVM Sets: Not Supported
00:13:27.302 Read Recovery Levels: Not Supported
00:13:27.302 Endurance Groups: Not Supported
00:13:27.302 Predictable Latency Mode: Not Supported
00:13:27.302 Traffic Based Keep ALive: Not Supported
00:13:27.302 Namespace Granularity: Not Supported
00:13:27.302 SQ Associations: Not Supported
00:13:27.302 UUID List: Not Supported
00:13:27.302 Multi-Domain Subsystem: Not Supported
00:13:27.302 Fixed Capacity Management: Not Supported
00:13:27.302 Variable Capacity Management: Not Supported
00:13:27.302 Delete Endurance Group: Not Supported
00:13:27.302 Delete NVM Set: Not Supported
00:13:27.302 Extended LBA Formats Supported: Not Supported
00:13:27.302 Flexible Data Placement Supported: Not Supported
00:13:27.302
00:13:27.302 Controller Memory Buffer Support
00:13:27.302 ================================
00:13:27.302 Supported: No
00:13:27.302
00:13:27.303 Persistent Memory Region Support
00:13:27.303 ================================
00:13:27.303 Supported: No
00:13:27.303
00:13:27.303 Admin Command Set Attributes
00:13:27.303 ============================
00:13:27.303 Security Send/Receive: Not Supported
00:13:27.303 Format NVM: Not Supported
00:13:27.303 Firmware Activate/Download: Not Supported
00:13:27.303 Namespace Management: Not Supported
00:13:27.303 Device Self-Test: Not Supported
00:13:27.303 Directives: Not Supported
00:13:27.303 NVMe-MI: Not Supported
00:13:27.303 Virtualization Management: Not Supported
00:13:27.303 Doorbell Buffer Config: Not Supported
00:13:27.303 Get LBA Status Capability: Not Supported
00:13:27.303 Command & Feature Lockdown Capability: Not Supported
00:13:27.303 Abort Command Limit: 4
00:13:27.303 Async Event Request Limit: 4
00:13:27.303 Number of Firmware Slots: N/A
00:13:27.303 Firmware Slot 1 Read-Only: N/A
00:13:27.303 Firmware Activation Without Reset: N/A
00:13:27.303 Multiple Update Detection Support: N/A
00:13:27.303 Firmware Update Granularity: No Information Provided
00:13:27.303 Per-Namespace SMART Log: No
00:13:27.303 Asymmetric Namespace Access Log Page: Not Supported
00:13:27.303 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:13:27.303 Command Effects Log Page: Supported
00:13:27.303 Get Log Page Extended Data: Supported
00:13:27.303 Telemetry Log Pages: Not Supported
00:13:27.303 Persistent Event Log Pages: Not Supported
00:13:27.303 Supported Log Pages Log Page: May Support
00:13:27.303 Commands Supported & Effects Log Page: Not Supported
00:13:27.303 Feature Identifiers & Effects Log Page:May Support
00:13:27.303 NVMe-MI Commands & Effects Log Page: May Support
00:13:27.303 Data Area 4 for Telemetry Log: Not Supported
00:13:27.303 Error Log Page Entries Supported: 128
00:13:27.303 Keep Alive: Supported
00:13:27.303 Keep Alive Granularity: 10000 ms
00:13:27.303
00:13:27.303 NVM Command Set Attributes
00:13:27.303 ==========================
00:13:27.303 Submission Queue Entry Size
00:13:27.303 Max: 64
00:13:27.303 Min: 64
00:13:27.303 Completion Queue Entry Size
00:13:27.303 Max: 16
00:13:27.303 Min: 16
00:13:27.303 Number of Namespaces: 32
00:13:27.303 Compare Command: Supported
00:13:27.303 Write Uncorrectable Command: Not Supported
00:13:27.303 Dataset Management Command: Supported
00:13:27.303 Write Zeroes Command: Supported
00:13:27.303 Set Features Save Field: Not Supported
00:13:27.303 Reservations: Not Supported
00:13:27.303 Timestamp: Not Supported
00:13:27.303 Copy: Supported
00:13:27.303 Volatile Write Cache: Present
00:13:27.303 Atomic Write Unit (Normal): 1
00:13:27.303 Atomic Write Unit (PFail): 1
00:13:27.303 Atomic Compare & Write Unit: 1
00:13:27.303 Fused Compare & Write: Supported
00:13:27.303 Scatter-Gather List
00:13:27.303 SGL Command Set: Supported (Dword aligned)
00:13:27.303 SGL Keyed: Not Supported
00:13:27.303 SGL Bit Bucket Descriptor: Not Supported
00:13:27.303 SGL Metadata Pointer: Not Supported
00:13:27.303 Oversized SGL: Not Supported
00:13:27.303 SGL Metadata Address: Not Supported
00:13:27.303 SGL Offset: Not Supported
00:13:27.303 Transport SGL Data Block: Not Supported
00:13:27.303 Replay Protected Memory Block: Not Supported
00:13:27.303
00:13:27.303 Firmware Slot Information
00:13:27.303 =========================
00:13:27.303 Active slot: 1
00:13:27.303 Slot 1 Firmware Revision: 25.01
00:13:27.303
00:13:27.303
00:13:27.303 Commands Supported and Effects
00:13:27.303 ==============================
00:13:27.303 Admin Commands
00:13:27.303 --------------
00:13:27.303 Get Log Page (02h): Supported
00:13:27.303 Identify (06h): Supported
00:13:27.303 Abort (08h): Supported
00:13:27.303 Set Features (09h): Supported
00:13:27.303 Get Features (0Ah): Supported
00:13:27.303 Asynchronous Event Request (0Ch): Supported
00:13:27.303 Keep Alive (18h): Supported
00:13:27.303 I/O Commands
00:13:27.303 ------------
00:13:27.303 Flush (00h): Supported LBA-Change
00:13:27.303 Write (01h): Supported LBA-Change
00:13:27.303 Read (02h): Supported
00:13:27.303 Compare (05h): Supported
00:13:27.303 Write Zeroes (08h): Supported LBA-Change
00:13:27.303 Dataset Management (09h): Supported LBA-Change
00:13:27.303 Copy (19h): Supported LBA-Change
00:13:27.303
00:13:27.303 Error Log
00:13:27.303 =========
00:13:27.303
00:13:27.303 Arbitration
00:13:27.303 ===========
00:13:27.303 Arbitration Burst: 1
00:13:27.303
00:13:27.303 Power Management
00:13:27.303 ================
00:13:27.303 Number of Power States: 1
00:13:27.303 Current Power State: Power State #0
00:13:27.303 Power State #0:
00:13:27.303 Max Power: 0.00 W
00:13:27.303 Non-Operational State: Operational
00:13:27.303 Entry Latency: Not Reported
00:13:27.303 Exit Latency: Not Reported
00:13:27.303 Relative Read Throughput: 0
00:13:27.303 Relative Read Latency: 0
00:13:27.303 Relative Write Throughput: 0
00:13:27.303 Relative Write Latency: 0
00:13:27.303 Idle Power: Not Reported
00:13:27.303 Active Power: Not Reported
00:13:27.303 Non-Operational Permissive Mode: Not Supported
00:13:27.303
00:13:27.303 Health Information
00:13:27.303 ==================
00:13:27.303 Critical Warnings:
00:13:27.303 Available Spare Space: OK
00:13:27.303 Temperature: OK
00:13:27.303 Device Reliability: OK
00:13:27.303 Read Only: No
00:13:27.303 Volatile Memory Backup: OK
00:13:27.303 Current Temperature: 0 Kelvin (-273 Celsius)
00:13:27.303 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:13:27.303 Available Spare: 0%
00:13:27.303 Available Sp[2024-11-17 14:23:16.490656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:13:27.303 [2024-11-17 14:23:16.490672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:13:27.303 [2024-11-17 14:23:16.490697] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:13:27.303 [2024-11-17 14:23:16.490707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:27.303 [2024-11-17 14:23:16.490712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:27.303 [2024-11-17 14:23:16.490718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:27.303 [2024-11-17 14:23:16.490724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:27.303 [2024-11-17 14:23:16.493360] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:13:27.303 [2024-11-17 14:23:16.493370] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:13:27.303 [2024-11-17 14:23:16.493821] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller [2024-11-17 14:23:16.493870] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us [2024-11-17 14:23:16.493877] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms [2024-11-17 14:23:16.494829] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 [2024-11-17 14:23:16.494840] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds [2024-11-17 14:23:16.494888] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl [2024-11-17 14:23:16.496862] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:27.569 are Threshold: 0%
00:13:27.570 Life Percentage Used: 0%
00:13:27.570 Data Units Read: 0
00:13:27.570 Data Units Written: 0
00:13:27.570 Host Read Commands: 0
00:13:27.570 Host Write Commands: 0
00:13:27.570 Controller Busy Time: 0 minutes
00:13:27.570 Power Cycles: 0
00:13:27.570 Power On Hours: 0 hours
00:13:27.570 Unsafe Shutdowns: 0
00:13:27.570 Unrecoverable Media Errors: 0
00:13:27.570 Lifetime Error Log Entries: 0
00:13:27.570 Warning Temperature Time: 0 minutes
00:13:27.570 Critical Temperature Time: 0 minutes
00:13:27.570
00:13:27.570 Number of Queues
00:13:27.570 ================
00:13:27.570 Number of I/O Submission Queues: 127
00:13:27.570 Number of I/O Completion Queues: 127
00:13:27.570
00:13:27.570 Active Namespaces
00:13:27.570 =================
00:13:27.570 Namespace ID:1
00:13:27.570 Error Recovery Timeout: Unlimited
00:13:27.570 Command Set Identifier: NVM (00h)
00:13:27.570 Deallocate: Supported
00:13:27.570 Deallocated/Unwritten Error: Not Supported
00:13:27.570 Deallocated Read Value: Unknown
00:13:27.570 Deallocate in Write Zeroes: Not Supported
00:13:27.570 Deallocated Guard Field: 0xFFFF
00:13:27.570 Flush: Supported
00:13:27.570 Reservation: Supported
00:13:27.570 Namespace Sharing Capabilities: Multiple Controllers
00:13:27.570 Size (in LBAs): 131072 (0GiB)
00:13:27.570 Capacity (in LBAs): 131072 (0GiB)
00:13:27.570 Utilization (in LBAs): 131072 (0GiB)
00:13:27.570 NGUID: 62707CCB079740D4998B2DBBB7793631
00:13:27.570 UUID: 62707ccb-0797-40d4-998b-2dbbb7793631
00:13:27.570 Thin Provisioning: Not Supported
00:13:27.570 Per-NS Atomic Units: Yes
00:13:27.570 Atomic Boundary Size (Normal): 0
00:13:27.570 Atomic Boundary Size (PFail): 0
00:13:27.570 Atomic Boundary Offset: 0
00:13:27.570 Maximum Single Source Range Length: 65535
00:13:27.570 Maximum Copy Length: 65535
00:13:27.570 Maximum Source Range Count: 1
00:13:27.570 NGUID/EUI64 Never Reused: No
00:13:27.570 Namespace Write Protected: No
00:13:27.570 Number of LBA Formats: 1
00:13:27.570 Current LBA Format: LBA Format #00
00:13:27.570 LBA Format #00: Data Size: 512 Metadata Size: 0
00:13:27.570
00:13:27.570 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
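The identify pass above and the perf passes it hands off to (@83, @84, and the @85 write run that follows) can be replayed standalone against the same vfio-user socket. All flags below are copied from the trace: -q 128 queue depth, -o 4096 byte I/O, -t 5 seconds, -c 0x2 one worker core; -s 256 (DPDK memory size in MB) and -g are passed through as the harness uses them:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

"$SPDK/build/bin/spdk_nvme_identify" -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
"$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
"$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2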
00:13:27.570 [2024-11-17 14:23:16.714130] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.046 Initializing NVMe Controllers 00:13:33.046 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:33.046 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:33.046 Initialization complete. Launching workers. 00:13:33.046 ======================================================== 00:13:33.046 Latency(us) 00:13:33.046 Device Information : IOPS MiB/s Average min max 00:13:33.046 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39951.34 156.06 3204.16 966.92 8093.44 00:13:33.046 ======================================================== 00:13:33.046 Total : 39951.34 156.06 3204.16 966.92 8093.44 00:13:33.046 00:13:33.046 [2024-11-17 14:23:21.735615] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.046 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:33.046 [2024-11-17 14:23:21.973763] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:38.316 Initializing NVMe Controllers 00:13:38.316 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:38.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:38.316 Initialization complete. Launching workers. 
00:13:38.316 ========================================================
00:13:38.316 Latency(us)
00:13:38.316 Device Information : IOPS MiB/s Average min max
00:13:38.316 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.05 62.70 7979.89 7778.36 8069.34
00:13:38.316 ========================================================
00:13:38.316 Total : 16051.05 62.70 7979.89 7778.36 8069.34
00:13:38.316
00:13:38.316 [2024-11-17 14:23:27.016029] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:38.316 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:13:38.316 [2024-11-17 14:23:27.220008] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:43.587 [2024-11-17 14:23:32.315796] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:43.587 Initializing NVMe Controllers
00:13:43.587 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:43.587 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:43.587 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:13:43.587 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:13:43.587 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:13:43.587 Initialization complete. Launching workers.
00:13:43.587 Starting thread on core 2
00:13:43.587 Starting thread on core 3
00:13:43.587 Starting thread on core 1
00:13:43.587 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:13:43.587 [2024-11-17 14:23:32.609206] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:46.876 [2024-11-17 14:23:35.665965] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:46.876 Initializing NVMe Controllers
00:13:46.876 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:13:46.876 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:13:46.876 Associating SPDK bdev Controller (SPDK1 ) with lcore 0
00:13:46.876 Associating SPDK bdev Controller (SPDK1 ) with lcore 1
00:13:46.876 Associating SPDK bdev Controller (SPDK1 ) with lcore 2
00:13:46.876 Associating SPDK bdev Controller (SPDK1 ) with lcore 3
00:13:46.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:13:46.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:13:46.876 Initialization complete. Launching workers.
00:13:46.876 Starting thread on core 1 with urgent priority queue 00:13:46.876 Starting thread on core 2 with urgent priority queue 00:13:46.876 Starting thread on core 3 with urgent priority queue 00:13:46.876 Starting thread on core 0 with urgent priority queue 00:13:46.876 SPDK bdev Controller (SPDK1 ) core 0: 8183.33 IO/s 12.22 secs/100000 ios 00:13:46.876 SPDK bdev Controller (SPDK1 ) core 1: 9073.33 IO/s 11.02 secs/100000 ios 00:13:46.876 SPDK bdev Controller (SPDK1 ) core 2: 8075.00 IO/s 12.38 secs/100000 ios 00:13:46.876 SPDK bdev Controller (SPDK1 ) core 3: 8629.33 IO/s 11.59 secs/100000 ios 00:13:46.876 ======================================================== 00:13:46.876 00:13:46.876 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:46.876 [2024-11-17 14:23:35.954459] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:46.876 Initializing NVMe Controllers 00:13:46.876 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.876 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.876 Namespace ID: 1 size: 0GB 00:13:46.876 Initialization complete. 00:13:46.876 INFO: using host memory buffer for IO 00:13:46.876 Hello world! 00:13:46.876 [2024-11-17 14:23:35.990723] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:46.876 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:47.135 [2024-11-17 14:23:36.277783] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.071 Initializing NVMe Controllers 00:13:48.071 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.071 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.071 Initialization complete. Launching workers. 
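The arbitration summary above reports each core's throughput two ways, IO/s and time per 100000 I/Os, and the two columns are reciprocals: for core 0, 100000 / 8183.33 IO/s ≈ 12.22 s, matching the printed "12.22 secs/100000 ios". The overhead run launched above follows, reporting per-I/O submit and complete path times as cumulative histograms (bucket ranges in microseconds, cumulative counts in parentheses).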
00:13:48.071 submit (in ns) avg, min, max = 7468.2, 3243.5, 3999848.7 00:13:48.071 complete (in ns) avg, min, max = 18939.8, 1771.3, 4030934.8 00:13:48.071 00:13:48.071 Submit histogram 00:13:48.071 ================ 00:13:48.071 Range in us Cumulative Count 00:13:48.071 3.242 - 3.256: 0.0124% ( 2) 00:13:48.071 3.256 - 3.270: 0.0185% ( 1) 00:13:48.071 3.270 - 3.283: 0.0494% ( 5) 00:13:48.071 3.283 - 3.297: 0.2286% ( 29) 00:13:48.071 3.297 - 3.311: 0.4695% ( 39) 00:13:48.071 3.311 - 3.325: 0.7352% ( 43) 00:13:48.071 3.325 - 3.339: 1.2912% ( 90) 00:13:48.071 3.339 - 3.353: 2.0017% ( 115) 00:13:48.071 3.353 - 3.367: 3.9479% ( 315) 00:13:48.071 3.367 - 3.381: 8.5938% ( 752) 00:13:48.071 3.381 - 3.395: 14.1419% ( 898) 00:13:48.071 3.395 - 3.409: 20.1841% ( 978) 00:13:48.071 3.409 - 3.423: 26.4364% ( 1012) 00:13:48.071 3.423 - 3.437: 32.7258% ( 1018) 00:13:48.071 3.437 - 3.450: 37.7240% ( 809) 00:13:48.071 3.450 - 3.464: 43.1978% ( 886) 00:13:48.071 3.464 - 3.478: 47.9303% ( 766) 00:13:48.071 3.478 - 3.492: 52.3539% ( 716) 00:13:48.071 3.492 - 3.506: 56.3512% ( 647) 00:13:48.071 3.506 - 3.520: 62.1895% ( 945) 00:13:48.071 3.520 - 3.534: 68.9979% ( 1102) 00:13:48.071 3.534 - 3.548: 73.0879% ( 662) 00:13:48.071 3.548 - 3.562: 77.8451% ( 770) 00:13:48.071 3.562 - 3.590: 85.0365% ( 1164) 00:13:48.071 3.590 - 3.617: 87.4089% ( 384) 00:13:48.071 3.617 - 3.645: 88.0514% ( 104) 00:13:48.071 3.645 - 3.673: 89.0770% ( 166) 00:13:48.071 3.673 - 3.701: 90.5783% ( 243) 00:13:48.071 3.701 - 3.729: 92.2340% ( 268) 00:13:48.071 3.729 - 3.757: 93.8589% ( 263) 00:13:48.071 3.757 - 3.784: 95.6011% ( 282) 00:13:48.071 3.784 - 3.812: 97.1457% ( 250) 00:13:48.071 3.812 - 3.840: 98.1898% ( 169) 00:13:48.071 3.840 - 3.868: 98.8076% ( 100) 00:13:48.071 3.868 - 3.896: 99.1721% ( 59) 00:13:48.071 3.896 - 3.923: 99.4193% ( 40) 00:13:48.071 3.923 - 3.951: 99.5305% ( 18) 00:13:48.071 3.951 - 3.979: 99.5613% ( 5) 00:13:48.071 3.979 - 4.007: 99.5984% ( 6) 00:13:48.071 4.063 - 4.090: 99.6046% ( 1) 00:13:48.071 4.146 - 4.174: 99.6108% ( 1) 00:13:48.071 4.341 - 4.369: 99.6170% ( 1) 00:13:48.071 4.536 - 4.563: 99.6231% ( 1) 00:13:48.071 5.565 - 5.593: 99.6293% ( 1) 00:13:48.071 5.649 - 5.677: 99.6355% ( 1) 00:13:48.071 5.788 - 5.816: 99.6417% ( 1) 00:13:48.071 5.927 - 5.955: 99.6478% ( 1) 00:13:48.071 6.010 - 6.038: 99.6540% ( 1) 00:13:48.071 6.289 - 6.317: 99.6602% ( 1) 00:13:48.071 6.317 - 6.344: 99.6664% ( 1) 00:13:48.071 6.344 - 6.372: 99.6726% ( 1) 00:13:48.071 6.428 - 6.456: 99.6849% ( 2) 00:13:48.071 6.567 - 6.595: 99.6911% ( 1) 00:13:48.071 6.595 - 6.623: 99.6973% ( 1) 00:13:48.071 6.623 - 6.650: 99.7034% ( 1) 00:13:48.071 6.650 - 6.678: 99.7096% ( 1) 00:13:48.071 6.706 - 6.734: 99.7158% ( 1) 00:13:48.071 6.984 - 7.012: 99.7220% ( 1) 00:13:48.071 7.179 - 7.235: 99.7282% ( 1) 00:13:48.071 7.402 - 7.457: 99.7343% ( 1) 00:13:48.071 7.457 - 7.513: 99.7405% ( 1) 00:13:48.071 7.513 - 7.569: 99.7529% ( 2) 00:13:48.071 7.624 - 7.680: 99.7714% ( 3) 00:13:48.071 7.736 - 7.791: 99.7838% ( 2) 00:13:48.071 7.791 - 7.847: 99.7899% ( 1) 00:13:48.071 7.903 - 7.958: 99.7961% ( 1) 00:13:48.071 7.958 - 8.014: 99.8023% ( 1) 00:13:48.071 8.014 - 8.070: 99.8147% ( 2) 00:13:48.071 8.070 - 8.125: 99.8208% ( 1) 00:13:48.071 8.181 - 8.237: 99.8270% ( 1) 00:13:48.071 8.237 - 8.292: 99.8394% ( 2) 00:13:48.071 8.737 - 8.793: 99.8455% ( 1) 00:13:48.071 9.127 - 9.183: 99.8517% ( 1) 00:13:48.071 9.294 - 9.350: 99.8579% ( 1) 00:13:48.071 9.461 - 9.517: 99.8641% ( 1) 00:13:48.071 9.628 - 9.683: 99.8703% ( 1) 00:13:48.071 13.802 - 13.857: 99.8764% ( 1) 
00:13:48.071 14.136 - 14.191: 99.8826% ( 1) 00:13:48.071 19.367 - 19.478: 99.8888% ( 1) 00:13:48.330 [2024-11-17 14:23:37.297808] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.330 19.478 - 19.590: 99.8950% ( 1) 00:13:48.330 40.737 - 40.960: 99.9011% ( 1) 00:13:48.330 3989.148 - 4017.642: 100.0000% ( 16) 00:13:48.330 00:13:48.330 Complete histogram 00:13:48.330 ================== 00:13:48.330 Range in us Cumulative Count 00:13:48.330 1.767 - 1.774: 0.0062% ( 1) 00:13:48.330 1.774 - 1.781: 0.0371% ( 5) 00:13:48.330 1.781 - 1.795: 0.0989% ( 10) 00:13:48.330 1.795 - 1.809: 0.1297% ( 5) 00:13:48.330 1.809 - 1.823: 0.7785% ( 105) 00:13:48.330 1.823 - 1.837: 4.5595% ( 612) 00:13:48.330 1.837 - 1.850: 6.9505% ( 387) 00:13:48.330 1.850 - 1.864: 7.7660% ( 132) 00:13:48.330 1.864 - 1.878: 22.6121% ( 2403) 00:13:48.330 1.878 - 1.892: 73.9034% ( 8302) 00:13:48.330 1.892 - 1.906: 90.4485% ( 2678) 00:13:48.330 1.906 - 1.920: 94.5570% ( 665) 00:13:48.330 1.920 - 1.934: 95.8174% ( 204) 00:13:48.330 1.934 - 1.948: 96.5464% ( 118) 00:13:48.330 1.948 - 1.962: 97.7759% ( 199) 00:13:48.330 1.962 - 1.976: 98.8570% ( 175) 00:13:48.330 1.976 - 1.990: 99.1103% ( 41) 00:13:48.330 1.990 - 2.003: 99.1907% ( 13) 00:13:48.330 2.003 - 2.017: 99.1968% ( 1) 00:13:48.330 2.017 - 2.031: 99.2030% ( 1) 00:13:48.330 2.045 - 2.059: 99.2092% ( 1) 00:13:48.330 2.073 - 2.087: 99.2277% ( 3) 00:13:48.330 2.101 - 2.115: 99.2339% ( 1) 00:13:48.330 2.129 - 2.143: 99.2401% ( 1) 00:13:48.330 2.143 - 2.157: 99.2524% ( 2) 00:13:48.330 2.157 - 2.170: 99.2586% ( 1) 00:13:48.330 2.170 - 2.184: 99.2710% ( 2) 00:13:48.330 2.198 - 2.212: 99.2772% ( 1) 00:13:48.330 2.212 - 2.226: 99.2833% ( 1) 00:13:48.330 2.254 - 2.268: 99.2895% ( 1) 00:13:48.330 2.268 - 2.282: 99.3019% ( 2) 00:13:48.330 2.296 - 2.310: 99.3080% ( 1) 00:13:48.330 2.310 - 2.323: 99.3142% ( 1) 00:13:48.331 2.351 - 2.365: 99.3328% ( 3) 00:13:48.331 2.379 - 2.393: 99.3389% ( 1) 00:13:48.331 2.407 - 2.421: 99.3451% ( 1) 00:13:48.331 2.435 - 2.449: 99.3513% ( 1) 00:13:48.331 2.477 - 2.490: 99.3575% ( 1) 00:13:48.331 4.063 - 4.090: 99.3636% ( 1) 00:13:48.331 4.341 - 4.369: 99.3698% ( 1) 00:13:48.331 4.369 - 4.397: 99.3760% ( 1) 00:13:48.331 4.480 - 4.508: 99.3822% ( 1) 00:13:48.331 4.786 - 4.814: 99.3884% ( 1) 00:13:48.331 4.981 - 5.009: 99.3945% ( 1) 00:13:48.331 5.037 - 5.064: 99.4007% ( 1) 00:13:48.331 5.148 - 5.176: 99.4069% ( 1) 00:13:48.331 5.231 - 5.259: 99.4131% ( 1) 00:13:48.331 5.398 - 5.426: 99.4193% ( 1) 00:13:48.331 5.426 - 5.454: 99.4254% ( 1) 00:13:48.331 5.732 - 5.760: 99.4316% ( 1) 00:13:48.331 5.760 - 5.788: 99.4440% ( 2) 00:13:48.331 5.816 - 5.843: 99.4501% ( 1) 00:13:48.331 5.899 - 5.927: 99.4563% ( 1) 00:13:48.331 5.955 - 5.983: 99.4625% ( 1) 00:13:48.331 6.038 - 6.066: 99.4687% ( 1) 00:13:48.331 6.372 - 6.400: 99.4749% ( 1) 00:13:48.331 6.428 - 6.456: 99.4810% ( 1) 00:13:48.331 6.456 - 6.483: 99.4872% ( 1) 00:13:48.331 6.539 - 6.567: 99.4934% ( 1) 00:13:48.331 6.678 - 6.706: 99.4996% ( 1) 00:13:48.331 6.734 - 6.762: 99.5057% ( 1) 00:13:48.331 6.984 - 7.012: 99.5119% ( 1) 00:13:48.331 7.123 - 7.179: 99.5181% ( 1) 00:13:48.331 7.179 - 7.235: 99.5243% ( 1) 00:13:48.331 7.680 - 7.736: 99.5305% ( 1) 00:13:48.331 7.736 - 7.791: 99.5366% ( 1) 00:13:48.331 8.125 - 8.181: 99.5428% ( 1) 00:13:48.331 8.682 - 8.737: 99.5490% ( 1) 00:13:48.331 8.904 - 8.960: 99.5552% ( 1) 00:13:48.331 10.574 - 10.630: 99.5613% ( 1) 00:13:48.331 17.586 - 17.697: 99.5675% ( 1) 00:13:48.331 162.950 - 163.840: 99.5737% ( 1) 
00:13:48.331 3989.148 - 4017.642: 99.9938% ( 68) 00:13:48.331 4017.642 - 4046.136: 100.0000% ( 1) 00:13:48.331 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:48.331 [ 00:13:48.331 { 00:13:48.331 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:48.331 "subtype": "Discovery", 00:13:48.331 "listen_addresses": [], 00:13:48.331 "allow_any_host": true, 00:13:48.331 "hosts": [] 00:13:48.331 }, 00:13:48.331 { 00:13:48.331 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:48.331 "subtype": "NVMe", 00:13:48.331 "listen_addresses": [ 00:13:48.331 { 00:13:48.331 "trtype": "VFIOUSER", 00:13:48.331 "adrfam": "IPv4", 00:13:48.331 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:48.331 "trsvcid": "0" 00:13:48.331 } 00:13:48.331 ], 00:13:48.331 "allow_any_host": true, 00:13:48.331 "hosts": [], 00:13:48.331 "serial_number": "SPDK1", 00:13:48.331 "model_number": "SPDK bdev Controller", 00:13:48.331 "max_namespaces": 32, 00:13:48.331 "min_cntlid": 1, 00:13:48.331 "max_cntlid": 65519, 00:13:48.331 "namespaces": [ 00:13:48.331 { 00:13:48.331 "nsid": 1, 00:13:48.331 "bdev_name": "Malloc1", 00:13:48.331 "name": "Malloc1", 00:13:48.331 "nguid": "62707CCB079740D4998B2DBBB7793631", 00:13:48.331 "uuid": "62707ccb-0797-40d4-998b-2dbbb7793631" 00:13:48.331 } 00:13:48.331 ] 00:13:48.331 }, 00:13:48.331 { 00:13:48.331 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:48.331 "subtype": "NVMe", 00:13:48.331 "listen_addresses": [ 00:13:48.331 { 00:13:48.331 "trtype": "VFIOUSER", 00:13:48.331 "adrfam": "IPv4", 00:13:48.331 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:48.331 "trsvcid": "0" 00:13:48.331 } 00:13:48.331 ], 00:13:48.331 "allow_any_host": true, 00:13:48.331 "hosts": [], 00:13:48.331 "serial_number": "SPDK2", 00:13:48.331 "model_number": "SPDK bdev Controller", 00:13:48.331 "max_namespaces": 32, 00:13:48.331 "min_cntlid": 1, 00:13:48.331 "max_cntlid": 65519, 00:13:48.331 "namespaces": [ 00:13:48.331 { 00:13:48.331 "nsid": 1, 00:13:48.331 "bdev_name": "Malloc2", 00:13:48.331 "name": "Malloc2", 00:13:48.331 "nguid": "9196083A0A694925A96DF430DFE122DA", 00:13:48.331 "uuid": "9196083a-0a69-4925-a96d-f430dfe122da" 00:13:48.331 } 00:13:48.331 ] 00:13:48.331 } 00:13:48.331 ] 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1427751 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:48.331 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:48.590 [2024-11-17 14:23:37.706789] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.590 Malloc3 00:13:48.590 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:48.848 [2024-11-17 14:23:37.942669] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.848 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:48.848 Asynchronous Event Request test 00:13:48.848 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.848 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.848 Registering asynchronous event callbacks... 00:13:48.848 Starting namespace attribute notice tests for all controllers... 00:13:48.849 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:48.849 aer_cb - Changed Namespace 00:13:48.849 Cleaning up... 
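The namespace-attribute AEN exercised here is driven entirely over JSON-RPC while the aer tool waits: creating a second malloc bdev and attaching it to the live subsystem triggers the "aer_cb - Changed Namespace" callback seen above. The equivalent manual sequence, using the same rpc.py calls this script issues:

  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  scripts/rpc.py nvmf_get_subsystems

The JSON dump that follows is the post-attach nvmf_get_subsystems view, with Malloc3 now listed as nsid 2 on cnode1.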
00:13:49.108 [ 00:13:49.108 { 00:13:49.108 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:49.108 "subtype": "Discovery", 00:13:49.108 "listen_addresses": [], 00:13:49.108 "allow_any_host": true, 00:13:49.108 "hosts": [] 00:13:49.108 }, 00:13:49.108 { 00:13:49.108 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:49.108 "subtype": "NVMe", 00:13:49.108 "listen_addresses": [ 00:13:49.108 { 00:13:49.108 "trtype": "VFIOUSER", 00:13:49.108 "adrfam": "IPv4", 00:13:49.108 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:49.108 "trsvcid": "0" 00:13:49.108 } 00:13:49.108 ], 00:13:49.108 "allow_any_host": true, 00:13:49.108 "hosts": [], 00:13:49.108 "serial_number": "SPDK1", 00:13:49.108 "model_number": "SPDK bdev Controller", 00:13:49.108 "max_namespaces": 32, 00:13:49.108 "min_cntlid": 1, 00:13:49.108 "max_cntlid": 65519, 00:13:49.108 "namespaces": [ 00:13:49.108 { 00:13:49.108 "nsid": 1, 00:13:49.108 "bdev_name": "Malloc1", 00:13:49.108 "name": "Malloc1", 00:13:49.108 "nguid": "62707CCB079740D4998B2DBBB7793631", 00:13:49.108 "uuid": "62707ccb-0797-40d4-998b-2dbbb7793631" 00:13:49.108 }, 00:13:49.108 { 00:13:49.108 "nsid": 2, 00:13:49.108 "bdev_name": "Malloc3", 00:13:49.108 "name": "Malloc3", 00:13:49.108 "nguid": "ABFC13933B804B829B67E2617FFD09E5", 00:13:49.108 "uuid": "abfc1393-3b80-4b82-9b67-e2617ffd09e5" 00:13:49.108 } 00:13:49.108 ] 00:13:49.108 }, 00:13:49.108 { 00:13:49.108 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:49.108 "subtype": "NVMe", 00:13:49.108 "listen_addresses": [ 00:13:49.108 { 00:13:49.108 "trtype": "VFIOUSER", 00:13:49.108 "adrfam": "IPv4", 00:13:49.108 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:49.108 "trsvcid": "0" 00:13:49.108 } 00:13:49.108 ], 00:13:49.108 "allow_any_host": true, 00:13:49.108 "hosts": [], 00:13:49.108 "serial_number": "SPDK2", 00:13:49.108 "model_number": "SPDK bdev Controller", 00:13:49.108 "max_namespaces": 32, 00:13:49.108 "min_cntlid": 1, 00:13:49.108 "max_cntlid": 65519, 00:13:49.108 "namespaces": [ 00:13:49.108 { 00:13:49.109 "nsid": 1, 00:13:49.109 "bdev_name": "Malloc2", 00:13:49.109 "name": "Malloc2", 00:13:49.109 "nguid": "9196083A0A694925A96DF430DFE122DA", 00:13:49.109 "uuid": "9196083a-0a69-4925-a96d-f430dfe122da" 00:13:49.109 } 00:13:49.109 ] 00:13:49.109 } 00:13:49.109 ] 00:13:49.109 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1427751 00:13:49.109 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:49.109 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:49.109 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:49.109 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:49.109 [2024-11-17 14:23:38.192399] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:13:49.109 [2024-11-17 14:23:38.192437] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427888 ] 00:13:49.109 [2024-11-17 14:23:38.233198] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:49.109 [2024-11-17 14:23:38.241590] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:49.109 [2024-11-17 14:23:38.241617] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe528e64000 00:13:49.109 [2024-11-17 14:23:38.242596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.109 [2024-11-17 14:23:38.243597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.109 [2024-11-17 14:23:38.244600] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.109 [2024-11-17 14:23:38.245615] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.109 [2024-11-17 14:23:38.246624] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.109 [2024-11-17 14:23:38.247632] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.109 [2024-11-17 14:23:38.248641] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:49.109 [2024-11-17 14:23:38.249649] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:49.109 [2024-11-17 14:23:38.250653] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:49.109 [2024-11-17 14:23:38.250667] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe528e59000 00:13:49.109 [2024-11-17 14:23:38.251612] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:49.109 [2024-11-17 14:23:38.261135] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:49.109 [2024-11-17 14:23:38.261160] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:49.109 [2024-11-17 14:23:38.266256] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:49.109 [2024-11-17 14:23:38.266298] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:49.109 [2024-11-17 14:23:38.266374] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:49.109 
[2024-11-17 14:23:38.266388] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:49.109 [2024-11-17 14:23:38.266393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:49.109 [2024-11-17 14:23:38.267267] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:49.109 [2024-11-17 14:23:38.267277] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:49.109 [2024-11-17 14:23:38.267283] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:49.109 [2024-11-17 14:23:38.268271] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:49.109 [2024-11-17 14:23:38.268281] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:49.109 [2024-11-17 14:23:38.268288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:49.109 [2024-11-17 14:23:38.269272] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:49.109 [2024-11-17 14:23:38.269282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:49.109 [2024-11-17 14:23:38.270280] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:49.109 [2024-11-17 14:23:38.270289] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:49.109 [2024-11-17 14:23:38.270294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:49.109 [2024-11-17 14:23:38.270300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:49.109 [2024-11-17 14:23:38.270407] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:49.109 [2024-11-17 14:23:38.270412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:49.109 [2024-11-17 14:23:38.270417] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:49.109 [2024-11-17 14:23:38.271288] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:49.109 [2024-11-17 14:23:38.272292] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:49.109 [2024-11-17 14:23:38.273301] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:49.109 [2024-11-17 14:23:38.274305] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:49.109 [2024-11-17 14:23:38.274344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:49.109 [2024-11-17 14:23:38.275316] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:49.109 [2024-11-17 14:23:38.275325] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:49.109 [2024-11-17 14:23:38.275330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:49.109 [2024-11-17 14:23:38.275346] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:49.109 [2024-11-17 14:23:38.275357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:49.109 [2024-11-17 14:23:38.275369] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.109 [2024-11-17 14:23:38.275374] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.109 [2024-11-17 14:23:38.275377] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.109 [2024-11-17 14:23:38.275390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.109 [2024-11-17 14:23:38.283360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:49.109 [2024-11-17 14:23:38.283372] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:49.109 [2024-11-17 14:23:38.283377] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:49.109 [2024-11-17 14:23:38.283380] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:49.109 [2024-11-17 14:23:38.283385] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:49.109 [2024-11-17 14:23:38.283392] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:49.109 [2024-11-17 14:23:38.283396] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:49.109 [2024-11-17 14:23:38.283400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:49.109 [2024-11-17 14:23:38.283408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:49.109 [2024-11-17 
14:23:38.283419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:49.109 [2024-11-17 14:23:38.291357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:49.109 [2024-11-17 14:23:38.291369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.109 [2024-11-17 14:23:38.291379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.109 [2024-11-17 14:23:38.291387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.109 [2024-11-17 14:23:38.291394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.109 [2024-11-17 14:23:38.291399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:49.109 [2024-11-17 14:23:38.291405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:49.109 [2024-11-17 14:23:38.291413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:49.110 [2024-11-17 14:23:38.299359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:49.110 [2024-11-17 14:23:38.299369] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:49.110 [2024-11-17 14:23:38.299374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:49.110 [2024-11-17 14:23:38.299380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:49.110 [2024-11-17 14:23:38.299386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:49.110 [2024-11-17 14:23:38.299394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:49.110 [2024-11-17 14:23:38.307359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:49.110 [2024-11-17 14:23:38.307417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:49.110 [2024-11-17 14:23:38.307425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:49.110 [2024-11-17 14:23:38.307432] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:49.110 [2024-11-17 14:23:38.307437] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:13:49.110 [2024-11-17 14:23:38.307440] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.110 [2024-11-17 14:23:38.307446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:49.110 [2024-11-17 14:23:38.315357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:49.110 [2024-11-17 14:23:38.315368] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:49.110 [2024-11-17 14:23:38.315376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:49.110 [2024-11-17 14:23:38.315384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:49.110 [2024-11-17 14:23:38.315390] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.110 [2024-11-17 14:23:38.315394] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.110 [2024-11-17 14:23:38.315399] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.110 [2024-11-17 14:23:38.315405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.110 [2024-11-17 14:23:38.323358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:49.110 [2024-11-17 14:23:38.323371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:49.110 [2024-11-17 14:23:38.323378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:49.110 [2024-11-17 14:23:38.323385] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:49.110 [2024-11-17 14:23:38.323389] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.110 [2024-11-17 14:23:38.323392] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.110 [2024-11-17 14:23:38.323397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.369 [2024-11-17 14:23:38.331357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:49.369 [2024-11-17 14:23:38.331371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:49.369 [2024-11-17 14:23:38.331377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:49.369 [2024-11-17 14:23:38.331386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:13:49.369 [2024-11-17 14:23:38.331391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:49.369 [2024-11-17 14:23:38.331396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:49.369 [2024-11-17 14:23:38.331401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:49.369 [2024-11-17 14:23:38.331405] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:49.369 [2024-11-17 14:23:38.331409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:49.369 [2024-11-17 14:23:38.331414] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:49.369 [2024-11-17 14:23:38.331430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:49.369 [2024-11-17 14:23:38.339356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:49.369 [2024-11-17 14:23:38.339371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:49.369 [2024-11-17 14:23:38.347358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:49.369 [2024-11-17 14:23:38.347370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:49.369 [2024-11-17 14:23:38.355356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:49.369 [2024-11-17 14:23:38.355371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:49.369 [2024-11-17 14:23:38.363357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:49.369 [2024-11-17 14:23:38.363372] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:49.369 [2024-11-17 14:23:38.363377] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:49.369 [2024-11-17 14:23:38.363380] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:49.369 [2024-11-17 14:23:38.363383] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:49.369 [2024-11-17 14:23:38.363386] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:49.369 [2024-11-17 14:23:38.363392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:49.369 [2024-11-17 14:23:38.363398] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:49.369 
[2024-11-17 14:23:38.363402] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:49.369 [2024-11-17 14:23:38.363405] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.369 [2024-11-17 14:23:38.363411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:49.369 [2024-11-17 14:23:38.363417] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:49.369 [2024-11-17 14:23:38.363421] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:49.370 [2024-11-17 14:23:38.363424] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.370 [2024-11-17 14:23:38.363430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:49.370 [2024-11-17 14:23:38.363437] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:49.370 [2024-11-17 14:23:38.363440] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:49.370 [2024-11-17 14:23:38.363444] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:49.370 [2024-11-17 14:23:38.363449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:49.370 [2024-11-17 14:23:38.375358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:49.370 [2024-11-17 14:23:38.375374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:49.370 [2024-11-17 14:23:38.375383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:49.370 [2024-11-17 14:23:38.375389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:49.370 ===================================================== 00:13:49.370 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:49.370 ===================================================== 00:13:49.370 Controller Capabilities/Features 00:13:49.370 ================================ 00:13:49.370 Vendor ID: 4e58 00:13:49.370 Subsystem Vendor ID: 4e58 00:13:49.370 Serial Number: SPDK2 00:13:49.370 Model Number: SPDK bdev Controller 00:13:49.370 Firmware Version: 25.01 00:13:49.370 Recommended Arb Burst: 6 00:13:49.370 IEEE OUI Identifier: 8d 6b 50 00:13:49.370 Multi-path I/O 00:13:49.370 May have multiple subsystem ports: Yes 00:13:49.370 May have multiple controllers: Yes 00:13:49.370 Associated with SR-IOV VF: No 00:13:49.370 Max Data Transfer Size: 131072 00:13:49.370 Max Number of Namespaces: 32 00:13:49.370 Max Number of I/O Queues: 127 00:13:49.370 NVMe Specification Version (VS): 1.3 00:13:49.370 NVMe Specification Version (Identify): 1.3 00:13:49.370 Maximum Queue Entries: 256 00:13:49.370 Contiguous Queues Required: Yes 00:13:49.370 Arbitration Mechanisms Supported 00:13:49.370 Weighted Round Robin: Not Supported 00:13:49.370 Vendor Specific: Not 
Supported 00:13:49.370 Reset Timeout: 15000 ms 00:13:49.370 Doorbell Stride: 4 bytes 00:13:49.370 NVM Subsystem Reset: Not Supported 00:13:49.370 Command Sets Supported 00:13:49.370 NVM Command Set: Supported 00:13:49.370 Boot Partition: Not Supported 00:13:49.370 Memory Page Size Minimum: 4096 bytes 00:13:49.370 Memory Page Size Maximum: 4096 bytes 00:13:49.370 Persistent Memory Region: Not Supported 00:13:49.370 Optional Asynchronous Events Supported 00:13:49.370 Namespace Attribute Notices: Supported 00:13:49.370 Firmware Activation Notices: Not Supported 00:13:49.370 ANA Change Notices: Not Supported 00:13:49.370 PLE Aggregate Log Change Notices: Not Supported 00:13:49.370 LBA Status Info Alert Notices: Not Supported 00:13:49.370 EGE Aggregate Log Change Notices: Not Supported 00:13:49.370 Normal NVM Subsystem Shutdown event: Not Supported 00:13:49.370 Zone Descriptor Change Notices: Not Supported 00:13:49.370 Discovery Log Change Notices: Not Supported 00:13:49.370 Controller Attributes 00:13:49.370 128-bit Host Identifier: Supported 00:13:49.370 Non-Operational Permissive Mode: Not Supported 00:13:49.370 NVM Sets: Not Supported 00:13:49.370 Read Recovery Levels: Not Supported 00:13:49.370 Endurance Groups: Not Supported 00:13:49.370 Predictable Latency Mode: Not Supported 00:13:49.370 Traffic Based Keep ALive: Not Supported 00:13:49.370 Namespace Granularity: Not Supported 00:13:49.370 SQ Associations: Not Supported 00:13:49.370 UUID List: Not Supported 00:13:49.370 Multi-Domain Subsystem: Not Supported 00:13:49.370 Fixed Capacity Management: Not Supported 00:13:49.370 Variable Capacity Management: Not Supported 00:13:49.370 Delete Endurance Group: Not Supported 00:13:49.370 Delete NVM Set: Not Supported 00:13:49.370 Extended LBA Formats Supported: Not Supported 00:13:49.370 Flexible Data Placement Supported: Not Supported 00:13:49.370 00:13:49.370 Controller Memory Buffer Support 00:13:49.370 ================================ 00:13:49.370 Supported: No 00:13:49.370 00:13:49.370 Persistent Memory Region Support 00:13:49.370 ================================ 00:13:49.370 Supported: No 00:13:49.370 00:13:49.370 Admin Command Set Attributes 00:13:49.370 ============================ 00:13:49.370 Security Send/Receive: Not Supported 00:13:49.370 Format NVM: Not Supported 00:13:49.370 Firmware Activate/Download: Not Supported 00:13:49.370 Namespace Management: Not Supported 00:13:49.370 Device Self-Test: Not Supported 00:13:49.370 Directives: Not Supported 00:13:49.370 NVMe-MI: Not Supported 00:13:49.370 Virtualization Management: Not Supported 00:13:49.370 Doorbell Buffer Config: Not Supported 00:13:49.370 Get LBA Status Capability: Not Supported 00:13:49.370 Command & Feature Lockdown Capability: Not Supported 00:13:49.370 Abort Command Limit: 4 00:13:49.370 Async Event Request Limit: 4 00:13:49.370 Number of Firmware Slots: N/A 00:13:49.370 Firmware Slot 1 Read-Only: N/A 00:13:49.370 Firmware Activation Without Reset: N/A 00:13:49.370 Multiple Update Detection Support: N/A 00:13:49.370 Firmware Update Granularity: No Information Provided 00:13:49.370 Per-Namespace SMART Log: No 00:13:49.370 Asymmetric Namespace Access Log Page: Not Supported 00:13:49.370 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:49.370 Command Effects Log Page: Supported 00:13:49.370 Get Log Page Extended Data: Supported 00:13:49.370 Telemetry Log Pages: Not Supported 00:13:49.370 Persistent Event Log Pages: Not Supported 00:13:49.370 Supported Log Pages Log Page: May Support 00:13:49.370 Commands Supported & 
Effects Log Page: Not Supported 00:13:49.370 Feature Identifiers & Effects Log Page:May Support 00:13:49.370 NVMe-MI Commands & Effects Log Page: May Support 00:13:49.370 Data Area 4 for Telemetry Log: Not Supported 00:13:49.370 Error Log Page Entries Supported: 128 00:13:49.370 Keep Alive: Supported 00:13:49.370 Keep Alive Granularity: 10000 ms 00:13:49.370 00:13:49.370 NVM Command Set Attributes 00:13:49.370 ========================== 00:13:49.370 Submission Queue Entry Size 00:13:49.370 Max: 64 00:13:49.370 Min: 64 00:13:49.370 Completion Queue Entry Size 00:13:49.370 Max: 16 00:13:49.370 Min: 16 00:13:49.370 Number of Namespaces: 32 00:13:49.370 Compare Command: Supported 00:13:49.370 Write Uncorrectable Command: Not Supported 00:13:49.370 Dataset Management Command: Supported 00:13:49.370 Write Zeroes Command: Supported 00:13:49.370 Set Features Save Field: Not Supported 00:13:49.370 Reservations: Not Supported 00:13:49.370 Timestamp: Not Supported 00:13:49.370 Copy: Supported 00:13:49.370 Volatile Write Cache: Present 00:13:49.370 Atomic Write Unit (Normal): 1 00:13:49.370 Atomic Write Unit (PFail): 1 00:13:49.370 Atomic Compare & Write Unit: 1 00:13:49.370 Fused Compare & Write: Supported 00:13:49.370 Scatter-Gather List 00:13:49.370 SGL Command Set: Supported (Dword aligned) 00:13:49.370 SGL Keyed: Not Supported 00:13:49.370 SGL Bit Bucket Descriptor: Not Supported 00:13:49.370 SGL Metadata Pointer: Not Supported 00:13:49.370 Oversized SGL: Not Supported 00:13:49.370 SGL Metadata Address: Not Supported 00:13:49.370 SGL Offset: Not Supported 00:13:49.370 Transport SGL Data Block: Not Supported 00:13:49.370 Replay Protected Memory Block: Not Supported 00:13:49.370 00:13:49.370 Firmware Slot Information 00:13:49.370 ========================= 00:13:49.370 Active slot: 1 00:13:49.370 Slot 1 Firmware Revision: 25.01 00:13:49.370 00:13:49.370 00:13:49.370 Commands Supported and Effects 00:13:49.370 ============================== 00:13:49.370 Admin Commands 00:13:49.370 -------------- 00:13:49.370 Get Log Page (02h): Supported 00:13:49.370 Identify (06h): Supported 00:13:49.370 Abort (08h): Supported 00:13:49.370 Set Features (09h): Supported 00:13:49.370 Get Features (0Ah): Supported 00:13:49.370 Asynchronous Event Request (0Ch): Supported 00:13:49.370 Keep Alive (18h): Supported 00:13:49.370 I/O Commands 00:13:49.370 ------------ 00:13:49.370 Flush (00h): Supported LBA-Change 00:13:49.370 Write (01h): Supported LBA-Change 00:13:49.370 Read (02h): Supported 00:13:49.370 Compare (05h): Supported 00:13:49.370 Write Zeroes (08h): Supported LBA-Change 00:13:49.370 Dataset Management (09h): Supported LBA-Change 00:13:49.370 Copy (19h): Supported LBA-Change 00:13:49.370 00:13:49.370 Error Log 00:13:49.370 ========= 00:13:49.370 00:13:49.370 Arbitration 00:13:49.370 =========== 00:13:49.370 Arbitration Burst: 1 00:13:49.370 00:13:49.371 Power Management 00:13:49.371 ================ 00:13:49.371 Number of Power States: 1 00:13:49.371 Current Power State: Power State #0 00:13:49.371 Power State #0: 00:13:49.371 Max Power: 0.00 W 00:13:49.371 Non-Operational State: Operational 00:13:49.371 Entry Latency: Not Reported 00:13:49.371 Exit Latency: Not Reported 00:13:49.371 Relative Read Throughput: 0 00:13:49.371 Relative Read Latency: 0 00:13:49.371 Relative Write Throughput: 0 00:13:49.371 Relative Write Latency: 0 00:13:49.371 Idle Power: Not Reported 00:13:49.371 Active Power: Not Reported 00:13:49.371 Non-Operational Permissive Mode: Not Supported 00:13:49.371 00:13:49.371 Health Information 
00:13:49.371 ================== 00:13:49.371 Critical Warnings: 00:13:49.371 Available Spare Space: OK 00:13:49.371 Temperature: OK 00:13:49.371 Device Reliability: OK 00:13:49.371 Read Only: No 00:13:49.371 Volatile Memory Backup: OK 00:13:49.371 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:49.371 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:49.371 Available Spare: 0% 00:13:49.371 [2024-11-17 14:23:38.375479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:49.371 [2024-11-17 14:23:38.383356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:49.371 [2024-11-17 14:23:38.383384] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:49.371 [2024-11-17 14:23:38.383392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.371 [2024-11-17 14:23:38.383398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.371 [2024-11-17 14:23:38.383405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.371 [2024-11-17 14:23:38.383411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.371 [2024-11-17 14:23:38.387357] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:49.371 [2024-11-17 14:23:38.387367] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:49.371 [2024-11-17 14:23:38.387501] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:49.371 [2024-11-17 14:23:38.387544] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:49.371 [2024-11-17 14:23:38.387550] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:49.371 [2024-11-17 14:23:38.388509] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:49.371 [2024-11-17 14:23:38.388520] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:49.371 [2024-11-17 14:23:38.388570] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:49.371 [2024-11-17 14:23:38.389547] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:49.371 Available Spare Threshold: 0% 00:13:49.371 Life Percentage Used: 0% 00:13:49.371 Data Units Read: 0 00:13:49.371 Data Units Written: 0 00:13:49.371 Host Read Commands: 0 00:13:49.371 Host Write Commands: 0 00:13:49.371 Controller Busy Time: 0 minutes 00:13:49.371 Power Cycles: 0 00:13:49.371 Power On Hours: 0 hours 00:13:49.371 Unsafe Shutdowns: 0 00:13:49.371 Unrecoverable Media Errors: 0 00:13:49.371 Lifetime Error Log Entries: 0 00:13:49.371 Warning Temperature
Time: 0 minutes 00:13:49.371 Critical Temperature Time: 0 minutes 00:13:49.371 00:13:49.371 Number of Queues 00:13:49.371 ================ 00:13:49.371 Number of I/O Submission Queues: 127 00:13:49.371 Number of I/O Completion Queues: 127 00:13:49.371 00:13:49.371 Active Namespaces 00:13:49.371 ================= 00:13:49.371 Namespace ID:1 00:13:49.371 Error Recovery Timeout: Unlimited 00:13:49.371 Command Set Identifier: NVM (00h) 00:13:49.371 Deallocate: Supported 00:13:49.371 Deallocated/Unwritten Error: Not Supported 00:13:49.371 Deallocated Read Value: Unknown 00:13:49.371 Deallocate in Write Zeroes: Not Supported 00:13:49.371 Deallocated Guard Field: 0xFFFF 00:13:49.371 Flush: Supported 00:13:49.371 Reservation: Supported 00:13:49.371 Namespace Sharing Capabilities: Multiple Controllers 00:13:49.371 Size (in LBAs): 131072 (0GiB) 00:13:49.371 Capacity (in LBAs): 131072 (0GiB) 00:13:49.371 Utilization (in LBAs): 131072 (0GiB) 00:13:49.371 NGUID: 9196083A0A694925A96DF430DFE122DA 00:13:49.371 UUID: 9196083a-0a69-4925-a96d-f430dfe122da 00:13:49.371 Thin Provisioning: Not Supported 00:13:49.371 Per-NS Atomic Units: Yes 00:13:49.371 Atomic Boundary Size (Normal): 0 00:13:49.371 Atomic Boundary Size (PFail): 0 00:13:49.371 Atomic Boundary Offset: 0 00:13:49.371 Maximum Single Source Range Length: 65535 00:13:49.371 Maximum Copy Length: 65535 00:13:49.371 Maximum Source Range Count: 1 00:13:49.371 NGUID/EUI64 Never Reused: No 00:13:49.371 Namespace Write Protected: No 00:13:49.371 Number of LBA Formats: 1 00:13:49.371 Current LBA Format: LBA Format #00 00:13:49.371 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:49.371 00:13:49.371 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:49.629 [2024-11-17 14:23:38.623902] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:54.896 Initializing NVMe Controllers 00:13:54.896 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:54.896 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:54.896 Initialization complete. Launching workers. 
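The controller dump above (vendor 4e58, model "SPDK bdev Controller", firmware 25.01, a single 131072-LBA namespace) comes from spdk_nvme_identify pointed at the second vfio-user endpoint. A minimal sketch with the same transport string and log flags the harness uses:

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -g -L nvme -L nvme_vfio -L vfio_pci

The -L flags enable the per-component *DEBUG* traces (register accesses, BAR mappings, controller state transitions) interleaved through the identify output above; the read and write latency tables for this controller follow.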
00:13:54.896 ======================================================== 00:13:54.896 Latency(us) 00:13:54.896 Device Information : IOPS MiB/s Average min max 00:13:54.896 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39941.21 156.02 3204.53 955.14 6779.68 00:13:54.896 ======================================================== 00:13:54.896 Total : 39941.21 156.02 3204.53 955.14 6779.68 00:13:54.896 00:13:54.896 [2024-11-17 14:23:43.732613] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:54.896 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:54.896 [2024-11-17 14:23:43.971322] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:00.169 Initializing NVMe Controllers 00:14:00.169 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:00.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:00.169 Initialization complete. Launching workers. 00:14:00.169 ======================================================== 00:14:00.169 Latency(us) 00:14:00.169 Device Information : IOPS MiB/s Average min max 00:14:00.169 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39836.93 155.61 3212.67 965.75 7172.42 00:14:00.169 ======================================================== 00:14:00.169 Total : 39836.93 155.61 3212.67 965.75 7172.42 00:14:00.169 00:14:00.169 [2024-11-17 14:23:48.988975] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:00.169 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:00.169 [2024-11-17 14:23:49.192407] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:05.437 [2024-11-17 14:23:54.336443] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:05.437 Initializing NVMe Controllers 00:14:05.437 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:05.437 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:05.437 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:05.437 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:05.437 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:05.437 Initialization complete. Launching workers. 
00:14:05.437 Starting thread on core 2 00:14:05.437 Starting thread on core 3 00:14:05.437 Starting thread on core 1 00:14:05.437 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:05.437 [2024-11-17 14:23:54.630344] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:08.724 [2024-11-17 14:23:57.704882] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:08.724 Initializing NVMe Controllers 00:14:08.724 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.724 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.724 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:08.724 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:08.724 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:08.724 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:08.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:08.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:08.724 Initialization complete. Launching workers. 00:14:08.724 Starting thread on core 1 with urgent priority queue 00:14:08.724 Starting thread on core 2 with urgent priority queue 00:14:08.725 Starting thread on core 3 with urgent priority queue 00:14:08.725 Starting thread on core 0 with urgent priority queue 00:14:08.725 SPDK bdev Controller (SPDK2 ) core 0: 8778.67 IO/s 11.39 secs/100000 ios 00:14:08.725 SPDK bdev Controller (SPDK2 ) core 1: 7895.33 IO/s 12.67 secs/100000 ios 00:14:08.725 SPDK bdev Controller (SPDK2 ) core 2: 10108.67 IO/s 9.89 secs/100000 ios 00:14:08.725 SPDK bdev Controller (SPDK2 ) core 3: 11805.33 IO/s 8.47 secs/100000 ios 00:14:08.725 ======================================================== 00:14:08.725 00:14:08.725 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:08.983 [2024-11-17 14:23:57.994576] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:08.983 Initializing NVMe Controllers 00:14:08.983 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.983 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.983 Namespace ID: 1 size: 0GB 00:14:08.983 Initialization complete. 00:14:08.983 INFO: using host memory buffer for IO 00:14:08.983 Hello world! 
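
The runs above exercise the same vfio-user controller through four SPDK example apps (spdk_nvme_perf read and write, reconnect, arbitration, hello_world), selecting it with a transport ID string instead of a PCI address. A minimal sketch of the read workload, assuming an SPDK build tree at $SPDK_DIR and the socket path created by this test:

  # Sketch only; $SPDK_DIR is an assumed local build tree, socket path copied from the log.
  SPDK_DIR=/path/to/spdk
  "$SPDK_DIR"/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -s 256 -g \
      -q 128 -o 4096 -w read -t 5 -c 0x2   # QD 128, 4 KiB reads, 5 s, core mask 0x2

The write and reconnect runs differ only in their workload flags (-w write; -w randrw -M 50 for a 50/50 mix at QD 32) and core masks, as their command lines above show.
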
00:14:08.983 [2024-11-17 14:23:58.006648] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:08.983 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:09.242 [2024-11-17 14:23:58.281408] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.176 Initializing NVMe Controllers 00:14:10.176 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.176 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.176 Initialization complete. Launching workers. 00:14:10.176 submit (in ns) avg, min, max = 8950.7, 3227.0, 4001318.3 00:14:10.176 complete (in ns) avg, min, max = 19251.5, 1767.0, 3999746.1 00:14:10.176 00:14:10.176 Submit histogram 00:14:10.176 ================ 00:14:10.176 Range in us Cumulative Count 00:14:10.176 3.214 - 3.228: 0.0062% ( 1) 00:14:10.176 3.228 - 3.242: 0.0186% ( 2) 00:14:10.176 3.242 - 3.256: 0.0248% ( 1) 00:14:10.176 3.256 - 3.270: 0.1428% ( 19) 00:14:10.176 3.270 - 3.283: 0.2112% ( 11) 00:14:10.176 3.283 - 3.297: 0.3292% ( 19) 00:14:10.176 3.297 - 3.311: 0.5714% ( 39) 00:14:10.176 3.311 - 3.325: 1.2484% ( 109) 00:14:10.176 3.325 - 3.339: 3.5836% ( 376) 00:14:10.176 3.339 - 3.353: 8.5212% ( 795) 00:14:10.176 3.353 - 3.367: 14.3283% ( 935) 00:14:10.176 3.367 - 3.381: 20.0981% ( 929) 00:14:10.176 3.381 - 3.395: 26.6257% ( 1051) 00:14:10.176 3.395 - 3.409: 32.3396% ( 920) 00:14:10.176 3.409 - 3.423: 36.9977% ( 750) 00:14:10.176 3.423 - 3.437: 42.1216% ( 825) 00:14:10.176 3.437 - 3.450: 47.0157% ( 788) 00:14:10.176 3.450 - 3.464: 51.0714% ( 653) 00:14:10.176 3.464 - 3.478: 55.3320% ( 686) 00:14:10.176 3.478 - 3.492: 61.3130% ( 963) 00:14:10.176 3.492 - 3.506: 67.9771% ( 1073) 00:14:10.176 3.506 - 3.520: 72.1508% ( 672) 00:14:10.176 3.520 - 3.534: 77.1132% ( 799) 00:14:10.176 3.534 - 3.548: 81.6347% ( 728) 00:14:10.176 3.548 - 3.562: 84.2743% ( 425) 00:14:10.176 3.562 - 3.590: 86.7834% ( 404) 00:14:10.176 3.590 - 3.617: 87.5598% ( 125) 00:14:10.176 3.617 - 3.645: 88.6591% ( 177) 00:14:10.176 3.645 - 3.673: 90.4043% ( 281) 00:14:10.176 3.673 - 3.701: 92.1558% ( 282) 00:14:10.176 3.701 - 3.729: 93.5842% ( 230) 00:14:10.176 3.729 - 3.757: 95.2674% ( 271) 00:14:10.176 3.757 - 3.784: 96.8573% ( 256) 00:14:10.176 3.784 - 3.812: 97.9939% ( 183) 00:14:10.176 3.812 - 3.840: 98.6647% ( 108) 00:14:10.176 3.840 - 3.868: 99.1367% ( 76) 00:14:10.176 3.868 - 3.896: 99.4348% ( 48) 00:14:10.176 3.896 - 3.923: 99.5404% ( 17) 00:14:10.176 3.923 - 3.951: 99.5528% ( 2) 00:14:10.176 3.979 - 4.007: 99.5590% ( 1) 00:14:10.176 5.287 - 5.315: 99.5652% ( 1) 00:14:10.176 5.315 - 5.343: 99.5715% ( 1) 00:14:10.176 5.398 - 5.426: 99.5777% ( 1) 00:14:10.176 5.482 - 5.510: 99.5839% ( 1) 00:14:10.176 5.510 - 5.537: 99.5901% ( 1) 00:14:10.176 5.760 - 5.788: 99.5963% ( 1) 00:14:10.176 5.871 - 5.899: 99.6087% ( 2) 00:14:10.176 6.317 - 6.344: 99.6149% ( 1) 00:14:10.176 6.372 - 6.400: 99.6211% ( 1) 00:14:10.176 6.539 - 6.567: 99.6274% ( 1) 00:14:10.176 6.650 - 6.678: 99.6336% ( 1) 00:14:10.176 6.762 - 6.790: 99.6398% ( 1) 00:14:10.176 6.845 - 6.873: 99.6460% ( 1) 00:14:10.176 6.873 - 6.901: 99.6584% ( 2) 00:14:10.176 6.901 - 6.929: 99.6646% ( 1) 00:14:10.176 6.929 - 6.957: 99.6708% ( 1) 00:14:10.176 7.123 - 7.179: 99.6832% ( 2) 00:14:10.176 7.179 - 
7.235: 99.6957% ( 2) 00:14:10.176 7.235 - 7.290: 99.7019% ( 1) 00:14:10.176 7.290 - 7.346: 99.7081% ( 1) 00:14:10.176 7.346 - 7.402: 99.7143% ( 1) 00:14:10.176 7.513 - 7.569: 99.7205% ( 1) 00:14:10.176 7.569 - 7.624: 99.7267% ( 1) 00:14:10.176 7.736 - 7.791: 99.7329% ( 1) 00:14:10.176 7.791 - 7.847: 99.7454% ( 2) 00:14:10.176 7.903 - 7.958: 99.7516% ( 1) 00:14:10.176 7.958 - 8.014: 99.7702% ( 3) 00:14:10.176 8.014 - 8.070: 99.7764% ( 1) 00:14:10.176 8.070 - 8.125: 99.7826% ( 1) 00:14:10.176 8.237 - 8.292: 99.7888% ( 1) 00:14:10.176 8.348 - 8.403: 99.7950% ( 1) 00:14:10.176 8.459 - 8.515: 99.8013% ( 1) 00:14:10.176 8.626 - 8.682: 99.8137% ( 2) 00:14:10.176 8.737 - 8.793: 99.8199% ( 1) 00:14:10.176 8.793 - 8.849: 99.8323% ( 2) 00:14:10.176 8.960 - 9.016: 99.8385% ( 1) 00:14:10.176 9.071 - 9.127: 99.8447% ( 1) 00:14:10.176 11.743 - 11.798: 99.8509% ( 1) 00:14:10.176 12.911 - 12.967: 99.8572% ( 1) 00:14:10.176 [2024-11-17 14:23:59.377431] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.434 13.134 - 13.190: 99.8634% ( 1) 00:14:10.434 3989.148 - 4017.642: 100.0000% ( 22) 00:14:10.434 00:14:10.434 Complete histogram 00:14:10.434 ================== 00:14:10.434 Range in us Cumulative Count 00:14:10.434 1.767 - 1.774: 0.0124% ( 2) 00:14:10.434 1.774 - 1.781: 0.0186% ( 1) 00:14:10.434 1.781 - 1.795: 0.0373% ( 3) 00:14:10.434 1.795 - 1.809: 0.1801% ( 23) 00:14:10.434 1.809 - 1.823: 1.0310% ( 137) 00:14:10.434 1.823 - 1.837: 2.5837% ( 250) 00:14:10.434 1.837 - 1.850: 4.0060% ( 229) 00:14:10.435 1.850 - 1.864: 18.0672% ( 2264) 00:14:10.435 1.864 - 1.878: 74.3618% ( 9064) 00:14:10.435 1.878 - 1.892: 91.2676% ( 2722) 00:14:10.435 1.892 - 1.906: 95.4848% ( 679) 00:14:10.435 1.906 - 1.920: 96.6524% ( 188) 00:14:10.435 1.920 - 1.934: 97.3604% ( 114) 00:14:10.435 1.934 - 1.948: 98.3293% ( 156) 00:14:10.435 1.948 - 1.962: 98.9628% ( 102) 00:14:10.435 1.962 - 1.976: 99.2174% ( 41) 00:14:10.435 1.976 - 1.990: 99.2609% ( 7) 00:14:10.435 1.990 - 2.003: 99.2795% ( 3) 00:14:10.435 2.017 - 2.031: 99.2858% ( 1) 00:14:10.435 2.031 - 2.045: 99.2920% ( 1) 00:14:10.435 2.059 - 2.073: 99.2982% ( 1) 00:14:10.435 2.073 - 2.087: 99.3168% ( 3) 00:14:10.435 2.087 - 2.101: 99.3230% ( 1) 00:14:10.435 2.101 - 2.115: 99.3292% ( 1) 00:14:10.435 2.129 - 2.143: 99.3417% ( 2) 00:14:10.435 2.157 - 2.170: 99.3479% ( 1) 00:14:10.435 2.184 - 2.198: 99.3541% ( 1) 00:14:10.435 2.198 - 2.212: 99.3665% ( 2) 00:14:10.435 2.351 - 2.365: 99.3727% ( 1) 00:14:10.435 2.365 - 2.379: 99.3789% ( 1) 00:14:10.435 2.630 - 2.643: 99.3851% ( 1) 00:14:10.435 3.784 - 3.812: 99.3913% ( 1) 00:14:10.435 4.563 - 4.591: 99.3976% ( 1) 00:14:10.435 4.591 - 4.619: 99.4038% ( 1) 00:14:10.435 4.647 - 4.675: 99.4100% ( 1) 00:14:10.435 4.675 - 4.703: 99.4162% ( 1) 00:14:10.435 4.842 - 4.870: 99.4224% ( 1) 00:14:10.435 4.953 - 4.981: 99.4286% ( 1) 00:14:10.435 5.009 - 5.037: 99.4348% ( 1) 00:14:10.435 5.176 - 5.203: 99.4410% ( 1) 00:14:10.435 5.370 - 5.398: 99.4472% ( 1) 00:14:10.435 5.454 - 5.482: 99.4535% ( 1) 00:14:10.435 5.537 - 5.565: 99.4597% ( 1) 00:14:10.435 5.621 - 5.649: 99.4659% ( 1) 00:14:10.435 5.955 - 5.983: 99.4721% ( 1) 00:14:10.435 6.122 - 6.150: 99.4783% ( 1) 00:14:10.435 6.177 - 6.205: 99.4845% ( 1) 00:14:10.435 6.289 - 6.317: 99.4907% ( 1) 00:14:10.435 6.400 - 6.428: 99.4969% ( 1) 00:14:10.435 6.678 - 6.706: 99.5031% ( 1) 00:14:10.435 6.706 - 6.734: 99.5093% ( 1) 00:14:10.435 6.734 - 6.762: 99.5156% ( 1) 00:14:10.435 6.901 - 6.929: 99.5218% ( 1) 00:14:10.435 7.123 - 7.179: 99.5280% ( 1) 
00:14:10.435 7.569 - 7.624: 99.5342% ( 1) 00:14:10.435 7.847 - 7.903: 99.5466% ( 2) 00:14:10.435 8.960 - 9.016: 99.5528% ( 1) 00:14:10.435 10.407 - 10.463: 99.5590% ( 1) 00:14:10.435 17.475 - 17.586: 99.5652% ( 1) 00:14:10.435 3989.148 - 4017.642: 100.0000% ( 70) 00:14:10.435 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:10.435 [ 00:14:10.435 { 00:14:10.435 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:10.435 "subtype": "Discovery", 00:14:10.435 "listen_addresses": [], 00:14:10.435 "allow_any_host": true, 00:14:10.435 "hosts": [] 00:14:10.435 }, 00:14:10.435 { 00:14:10.435 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:10.435 "subtype": "NVMe", 00:14:10.435 "listen_addresses": [ 00:14:10.435 { 00:14:10.435 "trtype": "VFIOUSER", 00:14:10.435 "adrfam": "IPv4", 00:14:10.435 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:10.435 "trsvcid": "0" 00:14:10.435 } 00:14:10.435 ], 00:14:10.435 "allow_any_host": true, 00:14:10.435 "hosts": [], 00:14:10.435 "serial_number": "SPDK1", 00:14:10.435 "model_number": "SPDK bdev Controller", 00:14:10.435 "max_namespaces": 32, 00:14:10.435 "min_cntlid": 1, 00:14:10.435 "max_cntlid": 65519, 00:14:10.435 "namespaces": [ 00:14:10.435 { 00:14:10.435 "nsid": 1, 00:14:10.435 "bdev_name": "Malloc1", 00:14:10.435 "name": "Malloc1", 00:14:10.435 "nguid": "62707CCB079740D4998B2DBBB7793631", 00:14:10.435 "uuid": "62707ccb-0797-40d4-998b-2dbbb7793631" 00:14:10.435 }, 00:14:10.435 { 00:14:10.435 "nsid": 2, 00:14:10.435 "bdev_name": "Malloc3", 00:14:10.435 "name": "Malloc3", 00:14:10.435 "nguid": "ABFC13933B804B829B67E2617FFD09E5", 00:14:10.435 "uuid": "abfc1393-3b80-4b82-9b67-e2617ffd09e5" 00:14:10.435 } 00:14:10.435 ] 00:14:10.435 }, 00:14:10.435 { 00:14:10.435 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:10.435 "subtype": "NVMe", 00:14:10.435 "listen_addresses": [ 00:14:10.435 { 00:14:10.435 "trtype": "VFIOUSER", 00:14:10.435 "adrfam": "IPv4", 00:14:10.435 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:10.435 "trsvcid": "0" 00:14:10.435 } 00:14:10.435 ], 00:14:10.435 "allow_any_host": true, 00:14:10.435 "hosts": [], 00:14:10.435 "serial_number": "SPDK2", 00:14:10.435 "model_number": "SPDK bdev Controller", 00:14:10.435 "max_namespaces": 32, 00:14:10.435 "min_cntlid": 1, 00:14:10.435 "max_cntlid": 65519, 00:14:10.435 "namespaces": [ 00:14:10.435 { 00:14:10.435 "nsid": 1, 00:14:10.435 "bdev_name": "Malloc2", 00:14:10.435 "name": "Malloc2", 00:14:10.435 "nguid": "9196083A0A694925A96DF430DFE122DA", 00:14:10.435 "uuid": "9196083a-0a69-4925-a96d-f430dfe122da" 00:14:10.435 } 00:14:10.435 ] 00:14:10.435 } 00:14:10.435 ] 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1431426 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:10.435 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:10.694 [2024-11-17 14:23:59.760804] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.694 Malloc4 00:14:10.694 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:10.952 [2024-11-17 14:24:00.018753] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.952 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:10.952 Asynchronous Event Request test 00:14:10.952 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.952 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.952 Registering asynchronous event callbacks... 00:14:10.952 Starting namespace attribute notice tests for all controllers... 00:14:10.952 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:10.952 aer_cb - Changed Namespace 00:14:10.952 Cleaning up... 
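
The AER check works by hot-adding a namespace while the aer example is attached: a new malloc bdev (Malloc4) is created and attached to cnode2 as NSID 2, the controller raises a namespace-attribute-changed event (aer_cb above), and the subsystem state is dumped again, which is the JSON that follows. A sketch of the same RPC sequence, assuming rpc.py is run from an SPDK checkout against the default /var/tmp/spdk.sock socket (arguments copied from the log):

  # Sketch of the hot-add that fires the AEN; paths are assumptions.
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4                         # 64 MB bdev, 512-byte blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2    # request NSID 2
  scripts/rpc.py nvmf_get_subsystems                                              # prints the JSON shown below
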
00:14:11.211 [ 00:14:11.211 { 00:14:11.211 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:11.211 "subtype": "Discovery", 00:14:11.211 "listen_addresses": [], 00:14:11.211 "allow_any_host": true, 00:14:11.211 "hosts": [] 00:14:11.211 }, 00:14:11.211 { 00:14:11.211 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:11.211 "subtype": "NVMe", 00:14:11.211 "listen_addresses": [ 00:14:11.211 { 00:14:11.211 "trtype": "VFIOUSER", 00:14:11.211 "adrfam": "IPv4", 00:14:11.211 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:11.211 "trsvcid": "0" 00:14:11.211 } 00:14:11.211 ], 00:14:11.211 "allow_any_host": true, 00:14:11.211 "hosts": [], 00:14:11.211 "serial_number": "SPDK1", 00:14:11.211 "model_number": "SPDK bdev Controller", 00:14:11.211 "max_namespaces": 32, 00:14:11.211 "min_cntlid": 1, 00:14:11.211 "max_cntlid": 65519, 00:14:11.211 "namespaces": [ 00:14:11.211 { 00:14:11.211 "nsid": 1, 00:14:11.211 "bdev_name": "Malloc1", 00:14:11.211 "name": "Malloc1", 00:14:11.211 "nguid": "62707CCB079740D4998B2DBBB7793631", 00:14:11.211 "uuid": "62707ccb-0797-40d4-998b-2dbbb7793631" 00:14:11.211 }, 00:14:11.211 { 00:14:11.211 "nsid": 2, 00:14:11.211 "bdev_name": "Malloc3", 00:14:11.211 "name": "Malloc3", 00:14:11.211 "nguid": "ABFC13933B804B829B67E2617FFD09E5", 00:14:11.211 "uuid": "abfc1393-3b80-4b82-9b67-e2617ffd09e5" 00:14:11.211 } 00:14:11.211 ] 00:14:11.211 }, 00:14:11.211 { 00:14:11.211 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:11.211 "subtype": "NVMe", 00:14:11.211 "listen_addresses": [ 00:14:11.211 { 00:14:11.211 "trtype": "VFIOUSER", 00:14:11.211 "adrfam": "IPv4", 00:14:11.211 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:11.211 "trsvcid": "0" 00:14:11.211 } 00:14:11.211 ], 00:14:11.211 "allow_any_host": true, 00:14:11.211 "hosts": [], 00:14:11.211 "serial_number": "SPDK2", 00:14:11.211 "model_number": "SPDK bdev Controller", 00:14:11.211 "max_namespaces": 32, 00:14:11.211 "min_cntlid": 1, 00:14:11.211 "max_cntlid": 65519, 00:14:11.211 "namespaces": [ 00:14:11.211 { 00:14:11.211 "nsid": 1, 00:14:11.211 "bdev_name": "Malloc2", 00:14:11.211 "name": "Malloc2", 00:14:11.211 "nguid": "9196083A0A694925A96DF430DFE122DA", 00:14:11.211 "uuid": "9196083a-0a69-4925-a96d-f430dfe122da" 00:14:11.211 }, 00:14:11.211 { 00:14:11.211 "nsid": 2, 00:14:11.211 "bdev_name": "Malloc4", 00:14:11.211 "name": "Malloc4", 00:14:11.211 "nguid": "45A1CBBEA3C945A7A2F1A222B07A5FFE", 00:14:11.211 "uuid": "45a1cbbe-a3c9-45a7-a2f1-a222b07a5ffe" 00:14:11.211 } 00:14:11.211 ] 00:14:11.211 } 00:14:11.211 ] 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1431426 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1423805 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1423805 ']' 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1423805 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1423805 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1423805' 00:14:11.211 killing process with pid 1423805 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1423805 00:14:11.211 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1423805 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1431600 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1431600' 00:14:11.470 Process pid: 1431600 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1431600 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1431600 ']' 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.470 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.471 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.471 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:11.471 [2024-11-17 14:24:00.594589] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:11.471 [2024-11-17 14:24:00.595533] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:14:11.471 [2024-11-17 14:24:00.595576] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.471 [2024-11-17 14:24:00.675149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.730 [2024-11-17 14:24:00.722391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.730 [2024-11-17 14:24:00.722428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.730 [2024-11-17 14:24:00.722436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.730 [2024-11-17 14:24:00.722442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.730 [2024-11-17 14:24:00.722447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.730 [2024-11-17 14:24:00.724055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.730 [2024-11-17 14:24:00.724164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.730 [2024-11-17 14:24:00.724205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.730 [2024-11-17 14:24:00.724205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.730 [2024-11-17 14:24:00.792200] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:11.730 [2024-11-17 14:24:00.793074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:11.730 [2024-11-17 14:24:00.793238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:11.730 [2024-11-17 14:24:00.793645] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:11.730 [2024-11-17 14:24:00.793690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
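
For the interrupt-mode pass, the target has been relaunched with --interrupt-mode on cores 0-3, and the notices above show each reactor and nvmf poll-group thread switching to intr mode. The VFIOUSER transport is then created with the extra -M -I flags this test passes through. A sketch of the relaunch, with paths assumed and flags copied verbatim from the log:

  # Sketch only; run from an SPDK checkout. -M and -I are the transport flags used by this test.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # wait for the RPC socket to come up (the test uses waitforlisten), then:
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
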
00:14:12.299 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.299 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:12.299 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:13.237 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:13.496 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:13.496 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:13.496 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:13.496 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:13.496 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:13.756 Malloc1 00:14:13.756 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:14.016 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:14.275 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:14.534 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:14.534 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:14.534 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:14.534 Malloc2 00:14:14.534 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:14.793 14:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:15.051 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1431600 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1431600 ']' 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1431600 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1431600 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1431600' 00:14:15.311 killing process with pid 1431600 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1431600 00:14:15.311 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1431600 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:15.570 00:14:15.570 real 0m51.550s 00:14:15.570 user 3m16.999s 00:14:15.570 sys 0m3.380s 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:15.570 ************************************ 00:14:15.570 END TEST nvmf_vfio_user 00:14:15.570 ************************************ 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:15.570 ************************************ 00:14:15.570 START TEST nvmf_vfio_user_nvme_compliance 00:14:15.570 ************************************ 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:15.570 * Looking for test storage... 
00:14:15.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:15.570 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.831 --rc genhtml_branch_coverage=1 00:14:15.831 --rc genhtml_function_coverage=1 00:14:15.831 --rc genhtml_legend=1 00:14:15.831 --rc geninfo_all_blocks=1 00:14:15.831 --rc geninfo_unexecuted_blocks=1 00:14:15.831 00:14:15.831 ' 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.831 --rc genhtml_branch_coverage=1 00:14:15.831 --rc genhtml_function_coverage=1 00:14:15.831 --rc genhtml_legend=1 00:14:15.831 --rc geninfo_all_blocks=1 00:14:15.831 --rc geninfo_unexecuted_blocks=1 00:14:15.831 00:14:15.831 ' 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.831 --rc genhtml_branch_coverage=1 00:14:15.831 --rc genhtml_function_coverage=1 00:14:15.831 --rc genhtml_legend=1 00:14:15.831 --rc geninfo_all_blocks=1 00:14:15.831 --rc geninfo_unexecuted_blocks=1 00:14:15.831 00:14:15.831 ' 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.831 --rc genhtml_branch_coverage=1 00:14:15.831 --rc genhtml_function_coverage=1 00:14:15.831 --rc genhtml_legend=1 00:14:15.831 --rc geninfo_all_blocks=1 00:14:15.831 --rc 
geninfo_unexecuted_blocks=1 00:14:15.831 00:14:15.831 ' 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.831 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:15.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1432520 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1432520' 00:14:15.832 Process pid: 1432520 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1432520 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1432520 ']' 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.832 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.832 [2024-11-17 14:24:04.921801] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
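
The compliance suite brings up its own 3-core target (-m 0x7) and then builds a minimal vfio-user subsystem before running the CUnit binary: a VFIOUSER transport, one 64 MB malloc namespace, and a listener at /var/run/vfio-user. The steps that follow in this log, collected into one sketch (paths assumed, arguments copied from the log):

  # Sketch of the compliance-test setup; run from an SPDK checkout once the target is listening.
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # up to 32 namespaces
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
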
00:14:15.832 [2024-11-17 14:24:04.921852] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.832 [2024-11-17 14:24:04.994881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:15.832 [2024-11-17 14:24:05.036677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.832 [2024-11-17 14:24:05.036716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.832 [2024-11-17 14:24:05.036722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.832 [2024-11-17 14:24:05.036729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.832 [2024-11-17 14:24:05.036734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.832 [2024-11-17 14:24:05.038129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.832 [2024-11-17 14:24:05.038242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.832 [2024-11-17 14:24:05.038243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.091 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.091 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:16.091 14:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.028 malloc0 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:17.028 14:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.028 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.029 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.029 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:17.029 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.029 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.029 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.029 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:17.029 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.029 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.029 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.029 14:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:17.288 00:14:17.288 00:14:17.288 CUnit - A unit testing framework for C - Version 2.1-3 00:14:17.288 http://cunit.sourceforge.net/ 00:14:17.288 00:14:17.288 00:14:17.288 Suite: nvme_compliance 00:14:17.288 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-17 14:24:06.362873] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.288 [2024-11-17 14:24:06.364219] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:17.288 [2024-11-17 14:24:06.364234] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:17.288 [2024-11-17 14:24:06.364240] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:17.288 [2024-11-17 14:24:06.365896] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.288 passed 00:14:17.288 Test: admin_identify_ctrlr_verify_fused ...[2024-11-17 14:24:06.444462] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.288 [2024-11-17 14:24:06.447480] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.288 passed 00:14:17.546 Test: admin_identify_ns ...[2024-11-17 14:24:06.526523] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.546 [2024-11-17 14:24:06.586364] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:17.546 [2024-11-17 14:24:06.594362] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:17.546 [2024-11-17 14:24:06.615457] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:17.546 passed 00:14:17.546 Test: admin_get_features_mandatory_features ...[2024-11-17 14:24:06.687772] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.547 [2024-11-17 14:24:06.690795] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.547 passed 00:14:17.806 Test: admin_get_features_optional_features ...[2024-11-17 14:24:06.770318] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.806 [2024-11-17 14:24:06.773348] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.806 passed 00:14:17.806 Test: admin_set_features_number_of_queues ...[2024-11-17 14:24:06.851735] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.806 [2024-11-17 14:24:06.957444] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.806 passed 00:14:18.065 Test: admin_get_log_page_mandatory_logs ...[2024-11-17 14:24:07.031580] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.065 [2024-11-17 14:24:07.034608] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.065 passed 00:14:18.065 Test: admin_get_log_page_with_lpo ...[2024-11-17 14:24:07.112403] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.065 [2024-11-17 14:24:07.181361] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:18.065 [2024-11-17 14:24:07.194415] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.065 passed 00:14:18.065 Test: fabric_property_get ...[2024-11-17 14:24:07.271276] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.065 [2024-11-17 14:24:07.272527] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:18.065 [2024-11-17 14:24:07.274294] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.323 passed 00:14:18.323 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-17 14:24:07.352805] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.323 [2024-11-17 14:24:07.354048] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:18.323 [2024-11-17 14:24:07.355827] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.323 passed 00:14:18.323 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-17 14:24:07.434642] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.323 [2024-11-17 14:24:07.519361] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:18.323 [2024-11-17 14:24:07.535357] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:18.323 [2024-11-17 14:24:07.540443] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.582 passed 00:14:18.582 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-17 14:24:07.612595] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.582 [2024-11-17 14:24:07.613849] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:18.582 [2024-11-17 14:24:07.617635] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.582 passed 00:14:18.582 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-17 14:24:07.693497] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.582 [2024-11-17 14:24:07.770364] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:18.582 [2024-11-17 14:24:07.794358] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:18.582 [2024-11-17 14:24:07.799442] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.841 passed 00:14:18.841 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-17 14:24:07.876260] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.841 [2024-11-17 14:24:07.877499] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:18.841 [2024-11-17 14:24:07.877525] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:18.841 [2024-11-17 14:24:07.879284] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.841 passed 00:14:18.841 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-17 14:24:07.957166] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.841 [2024-11-17 14:24:08.049362] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:18.841 [2024-11-17 14:24:08.057371] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:19.100 [2024-11-17 14:24:08.065362] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:19.100 [2024-11-17 14:24:08.073360] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:19.100 [2024-11-17 14:24:08.102434] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.100 passed 00:14:19.100 Test: admin_create_io_sq_verify_pc ...[2024-11-17 14:24:08.177309] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.100 [2024-11-17 14:24:08.192365] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:19.100 [2024-11-17 14:24:08.209496] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.100 passed 00:14:19.100 Test: admin_create_io_qp_max_qps ...[2024-11-17 14:24:08.289061] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.475 [2024-11-17 14:24:09.383363] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:20.734 [2024-11-17 14:24:09.780279] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:20.734 passed 00:14:20.734 Test: admin_create_io_sq_shared_cq ...[2024-11-17 14:24:09.856366] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.993 [2024-11-17 14:24:09.990368] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:20.993 [2024-11-17 14:24:10.026429] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:20.993 passed 00:14:20.993 00:14:20.993 Run Summary: Type Total Ran Passed Failed Inactive 00:14:20.993 suites 1 1 n/a 0 0 00:14:20.993 tests 18 18 18 0 0 00:14:20.993 asserts 
360 360 360 0 n/a 00:14:20.993 00:14:20.993 Elapsed time = 1.509 seconds 00:14:20.993 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1432520 00:14:20.993 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1432520 ']' 00:14:20.993 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1432520 00:14:20.993 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:20.993 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.993 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1432520 00:14:20.993 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.993 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.993 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1432520' 00:14:20.993 killing process with pid 1432520 00:14:20.993 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1432520 00:14:20.993 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1432520 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:21.251 00:14:21.251 real 0m5.648s 00:14:21.251 user 0m15.792s 00:14:21.251 sys 0m0.513s 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:21.251 ************************************ 00:14:21.251 END TEST nvmf_vfio_user_nvme_compliance 00:14:21.251 ************************************ 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.251 ************************************ 00:14:21.251 START TEST nvmf_vfio_user_fuzz 00:14:21.251 ************************************ 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:21.251 * Looking for test storage... 
00:14:21.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:21.251 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:21.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.511 --rc genhtml_branch_coverage=1 00:14:21.511 --rc genhtml_function_coverage=1 00:14:21.511 --rc genhtml_legend=1 00:14:21.511 --rc geninfo_all_blocks=1 00:14:21.511 --rc geninfo_unexecuted_blocks=1 00:14:21.511 00:14:21.511 ' 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:21.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.511 --rc genhtml_branch_coverage=1 00:14:21.511 --rc genhtml_function_coverage=1 00:14:21.511 --rc genhtml_legend=1 00:14:21.511 --rc geninfo_all_blocks=1 00:14:21.511 --rc geninfo_unexecuted_blocks=1 00:14:21.511 00:14:21.511 ' 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:21.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.511 --rc genhtml_branch_coverage=1 00:14:21.511 --rc genhtml_function_coverage=1 00:14:21.511 --rc genhtml_legend=1 00:14:21.511 --rc geninfo_all_blocks=1 00:14:21.511 --rc geninfo_unexecuted_blocks=1 00:14:21.511 00:14:21.511 ' 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:21.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.511 --rc genhtml_branch_coverage=1 00:14:21.511 --rc genhtml_function_coverage=1 00:14:21.511 --rc genhtml_legend=1 00:14:21.511 --rc geninfo_all_blocks=1 00:14:21.511 --rc geninfo_unexecuted_blocks=1 00:14:21.511 00:14:21.511 ' 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.511 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:21.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1433936 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1433936' 00:14:21.512 Process pid: 1433936 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1433936 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1433936 ']' 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
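The "Waiting for process..." lines just above reflect the standard autotest bring-up: launch the target in the background, install a trap so it is reaped on any exit, then block in waitforlisten until the app answers on its RPC socket. Stripped of xtrace, the sequence is roughly this (a sketch assuming autotest_common.sh is sourced, which supplies the killprocess and waitforlisten helpers seen in this log):

# -m 0x1 pins the target to core 0; -i 0 sets the shm id; -e 0xFFFF is the trace group mask
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
waitforlisten $nvmfpid    # polls /var/tmp/spdk.sock until an RPC call succeeds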
00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.512 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:21.771 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.771 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:21.771 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.708 malloc0 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
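Everything the fuzz run needs is wired up through the five RPCs visible as rpc_cmd calls above: a VFIOUSER transport, a 64 MiB malloc bdev, an allow-any-host subsystem, its namespace, and a listener on the vfio-user socket directory. Issued directly against the running target they would look like this (a sketch; rpc_cmd in the suite is, roughly, a wrapper around scripts/rpc.py talking to the same socket):

scripts/rpc.py nvmf_create_transport -t VFIOUSER
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0            # 64 MiB bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz invocation that runs next then connects through that vfio-user socket rather than a kernel NVMe device, which is why no PCI controller is needed for this stage.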
00:14:22.708 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:54.788 Fuzzing completed. Shutting down the fuzz application 00:14:54.788 00:14:54.788 Dumping successful admin opcodes: 00:14:54.788 8, 9, 10, 24, 00:14:54.788 Dumping successful io opcodes: 00:14:54.788 0, 00:14:54.788 NS: 0x20000081ef00 I/O qp, Total commands completed: 1052376, total successful commands: 4161, random_seed: 2390075456 00:14:54.788 NS: 0x20000081ef00 admin qp, Total commands completed: 261274, total successful commands: 2102, random_seed: 1424733696 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1433936 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1433936 ']' 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1433936 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1433936 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1433936' 00:14:54.788 killing process with pid 1433936 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1433936 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1433936 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:54.788 00:14:54.788 real 0m32.215s 00:14:54.788 user 0m30.776s 00:14:54.788 sys 0m31.197s 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:54.788 
************************************ 00:14:54.788 END TEST nvmf_vfio_user_fuzz 00:14:54.788 ************************************ 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:54.788 ************************************ 00:14:54.788 START TEST nvmf_auth_target 00:14:54.788 ************************************ 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:54.788 * Looking for test storage... 00:14:54.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.788 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:54.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.789 --rc genhtml_branch_coverage=1 00:14:54.789 --rc genhtml_function_coverage=1 00:14:54.789 --rc genhtml_legend=1 00:14:54.789 --rc geninfo_all_blocks=1 00:14:54.789 --rc geninfo_unexecuted_blocks=1 00:14:54.789 00:14:54.789 ' 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:54.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.789 --rc genhtml_branch_coverage=1 00:14:54.789 --rc genhtml_function_coverage=1 00:14:54.789 --rc genhtml_legend=1 00:14:54.789 --rc geninfo_all_blocks=1 00:14:54.789 --rc geninfo_unexecuted_blocks=1 00:14:54.789 00:14:54.789 ' 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:54.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.789 --rc genhtml_branch_coverage=1 00:14:54.789 --rc genhtml_function_coverage=1 00:14:54.789 --rc genhtml_legend=1 00:14:54.789 --rc geninfo_all_blocks=1 00:14:54.789 --rc geninfo_unexecuted_blocks=1 00:14:54.789 00:14:54.789 ' 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:54.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.789 --rc genhtml_branch_coverage=1 00:14:54.789 --rc genhtml_function_coverage=1 00:14:54.789 --rc genhtml_legend=1 00:14:54.789 --rc geninfo_all_blocks=1 00:14:54.789 --rc geninfo_unexecuted_blocks=1 00:14:54.789 00:14:54.789 ' 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.789 14:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.789 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:54.790 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.069 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:00.069 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:00.070 
14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:00.070 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.070 14:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:00.070 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:00.070 Found net devices under 0000:86:00.0: cvl_0_0 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:00.070 Found net devices under 0000:86:00.1: cvl_0_1 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:00.070 14:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:00.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:15:00.070 00:15:00.070 --- 10.0.0.2 ping statistics --- 00:15:00.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.070 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:15:00.070 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:15:00.071 00:15:00.071 --- 10.0.0.1 ping statistics --- 00:15:00.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.071 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1442236 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1442236 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1442236 ']' 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
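Both pings must succeed before anything NVMe-specific runs: 10.0.0.2 is reached from the default namespace and 10.0.0.1 from inside cvl_0_0_ns_spdk, confirming the plumbing in both directions. The target application is then launched through NVMF_TARGET_NS_CMD, i.e. wrapped in `ip netns exec`, so its TCP listener binds inside the namespace. A condensed sketch of that launch-and-wait pattern (paths as in this workspace; the polling loop paraphrases what the waitforlisten helper does):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    # poll the RPC socket until the target is ready to accept commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done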
00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.071 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1442261 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=534801b33863612ee0317a91919f5a016625eaeaf0efec0f 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.K72 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 534801b33863612ee0317a91919f5a016625eaeaf0efec0f 0 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 534801b33863612ee0317a91919f5a016625eaeaf0efec0f 0 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=534801b33863612ee0317a91919f5a016625eaeaf0efec0f 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
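gen_dhchap_key emits secrets in the DHHC-1 interchange format: `xxd -p -c0 -l <len/2> /dev/urandom` yields a hex string of exactly <len> characters, those ASCII characters are used verbatim as the secret, and the embedded python wraps them as DHHC-1:<dd>:<base64>: where <dd> is the two-digit digest index and the base64 payload is the key bytes followed by a 4-byte CRC32 trailer. A sketch of that wrapping step, consistent with the DHHC-1:00:... secret printed for this key later in the trace (the helper body is paraphrased, not copied, and the little-endian CRC byte order is an assumption):

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex characters
    digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
    python3 -c 'import base64, sys, zlib
    key, digest = sys.argv[1].encode(), int(sys.argv[2])
    crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity trailer
    print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))' "$key" "$digest"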
00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.K72 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.K72 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.K72 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=af38256817d4e41458afa0cbe70eac08316090728cca8492956ec83c313375ea 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PiG 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key af38256817d4e41458afa0cbe70eac08316090728cca8492956ec83c313375ea 3 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 af38256817d4e41458afa0cbe70eac08316090728cca8492956ec83c313375ea 3 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=af38256817d4e41458afa0cbe70eac08316090728cca8492956ec83c313375ea 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PiG 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PiG 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.PiG 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
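keys[i] is the host's DH-HMAC-CHAP secret (passed later as --dhchap-key / --dhchap-secret) and ckeys[i] is the controller secret for bidirectional authentication (--dhchap-ctrlr-key / --dhchap-ctrl-secret); pairing a null-hash host key with a sha512 controller key, as slot 0 does here, makes the two directions exercise different transforms. Given the wrapper format sketched above, a secret can be sanity-checked by peeling the wrapper back off (GNU head is assumed for the negative byte count; the sample is slot 0's secret exactly as printed further down in this log):

    secret='DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==:'
    cut -d: -f3 <<< "$secret" | base64 -d | head -c -4; echo   # recovers the 48 hex characters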
00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=78cba9eeda2c9826568e9e0028519282 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cDt 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 78cba9eeda2c9826568e9e0028519282 1 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 78cba9eeda2c9826568e9e0028519282 1 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=78cba9eeda2c9826568e9e0028519282 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:00.071 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cDt 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cDt 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.cDt 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=82bbed06a982b09d11683a99ac1636c70459757ecffbe684 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cf4 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 82bbed06a982b09d11683a99ac1636c70459757ecffbe684 2 00:15:00.330 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 82bbed06a982b09d11683a99ac1636c70459757ecffbe684 2 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.331 14:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=82bbed06a982b09d11683a99ac1636c70459757ecffbe684 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cf4 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cf4 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.cf4 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5eeb3b05afa7e06eadbb839a7cb61246747049f3db3c3f90 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jr8 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5eeb3b05afa7e06eadbb839a7cb61246747049f3db3c3f90 2 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5eeb3b05afa7e06eadbb839a7cb61246747049f3db3c3f90 2 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5eeb3b05afa7e06eadbb839a7cb61246747049f3db3c3f90 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jr8 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jr8 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.jr8 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
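The digest byte in each secret's second field follows the digests map declared above (null -> 00, sha256 -> 01, sha384 -> 02, sha512 -> 03), so the slot-1 pair generated here surfaces later in the connect commands as two differently tagged secrets:

    gen_dhchap_key sha256 32    # keys[1]  -> DHHC-1:01:...
    gen_dhchap_key sha384 48    # ckeys[1] -> DHHC-1:02:...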
00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=da44b2f82a6c47fd4ebe9d170c26f1b3 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.T9k 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key da44b2f82a6c47fd4ebe9d170c26f1b3 1 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 da44b2f82a6c47fd4ebe9d170c26f1b3 1 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=da44b2f82a6c47fd4ebe9d170c26f1b3 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.T9k 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.T9k 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.T9k 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c492c1cad5d579fb07ff5ea0a11717d66c125c54f119c507c2c040c51ff1a305 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.E5I 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key c492c1cad5d579fb07ff5ea0a11717d66c125c54f119c507c2c040c51ff1a305 3 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c492c1cad5d579fb07ff5ea0a11717d66c125c54f119c507c2c040c51ff1a305 3 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c492c1cad5d579fb07ff5ea0a11717d66c125c54f119c507c2c040c51ff1a305 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.E5I 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.E5I 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.E5I 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1442236 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1442236 ']' 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.331 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.590 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.590 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:00.590 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1442261 /var/tmp/host.sock 00:15:00.590 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1442261 ']' 00:15:00.590 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:00.590 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.590 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:00.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
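At this point all the secrets exist as 0600-mode files, one slot per iteration of the main key loop:

    slot  host key (keys[i])         controller key (ckeys[i])
    0     /tmp/spdk.key-null.K72     /tmp/spdk.key-sha512.PiG
    1     /tmp/spdk.key-sha256.cDt   /tmp/spdk.key-sha384.cf4
    2     /tmp/spdk.key-sha384.jr8   /tmp/spdk.key-sha256.T9k
    3     /tmp/spdk.key-sha512.E5I   (none)

Slot 3 deliberately omits the controller key, so that pass covers unidirectional (host-only) authentication. Two daemons are also up by now: nvmf_tgt (pid 1442236, RPC socket /var/tmp/spdk.sock) plays the target, and spdk_tgt (pid 1442261, RPC socket /var/tmp/host.sock, traced with -L nvme_auth) drives the host side of each bdev_nvme_attach_controller.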
00:15:00.590 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.590 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.K72 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.K72 00:15:00.850 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.K72 00:15:01.110 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.PiG ]] 00:15:01.110 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PiG 00:15:01.110 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.110 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.110 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.110 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PiG 00:15:01.110 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PiG 00:15:01.369 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:01.369 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cDt 00:15:01.369 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.369 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.369 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.369 14:24:50 
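Each key file must be registered with both daemons before it can be referenced by name: the target resolves key0/ckey0 when nvmf_subsystem_add_host runs, and the host resolves the same names inside bdev_nvme_attach_controller. As exercised above, the pattern is simply the same keyring RPC issued against the two sockets:

    # rpc_cmd talks to the target's default socket; hostrpc adds -s /var/tmp/host.sock
    ./scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.K72
    ./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.K72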
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.cDt 00:15:01.369 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.cDt 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.cf4 ]] 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cf4 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cf4 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cf4 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.jr8 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.jr8 00:15:01.628 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.jr8 00:15:01.887 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.T9k ]] 00:15:01.887 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T9k 00:15:01.887 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.887 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.887 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.887 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T9k 00:15:01.887 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T9k 00:15:02.146 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:02.146 14:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.E5I 00:15:02.146 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.146 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.146 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.146 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.E5I 00:15:02.146 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.E5I 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.405 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.405 
14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.665 00:15:02.665 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.665 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.665 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.924 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.924 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.924 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.924 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.924 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.924 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.924 { 00:15:02.924 "cntlid": 1, 00:15:02.924 "qid": 0, 00:15:02.924 "state": "enabled", 00:15:02.924 "thread": "nvmf_tgt_poll_group_000", 00:15:02.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:02.924 "listen_address": { 00:15:02.924 "trtype": "TCP", 00:15:02.924 "adrfam": "IPv4", 00:15:02.924 "traddr": "10.0.0.2", 00:15:02.924 "trsvcid": "4420" 00:15:02.924 }, 00:15:02.924 "peer_address": { 00:15:02.924 "trtype": "TCP", 00:15:02.924 "adrfam": "IPv4", 00:15:02.924 "traddr": "10.0.0.1", 00:15:02.924 "trsvcid": "45782" 00:15:02.924 }, 00:15:02.924 "auth": { 00:15:02.924 "state": "completed", 00:15:02.924 "digest": "sha256", 00:15:02.924 "dhgroup": "null" 00:15:02.924 } 00:15:02.924 } 00:15:02.924 ]' 00:15:02.924 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.924 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.924 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.184 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:03.184 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.184 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.184 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.184 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.184 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:03.184 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:03.752 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.012 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:04.012 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.012 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.012 14:24:53 
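Every key slot is verified with the same four-step cycle, which the trace now repeats for slot 1: authorize the host NQN on the target with the slot's keys, attach through the SPDK host, re-connect once more with the kernel initiator using the literal DHHC-1 secrets, then tear everything down. Condensed from the trace, with rpc.py abbreviating the scripts/rpc.py path used above and shell variables standing in for the slot-specific values:

    # 1. target: allow this host with the slot's DH-HMAC-CHAP keys
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$i" --dhchap-ctrlr-key "ckey$i"
    # 2. SPDK host: attach a controller, check the qpair auth state, detach
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key "key$i" --dhchap-ctrlr-key "ckey$i"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # 3. kernel initiator: connect with the literal secrets, then disconnect
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$key_secret" --dhchap-ctrl-secret "$ckey_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # 4. tear down so the next slot starts from a clean subsystem
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"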
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.012 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.271 00:15:04.271 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.271 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.271 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.531 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.531 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.531 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.531 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.531 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.531 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.531 { 00:15:04.531 "cntlid": 3, 00:15:04.531 "qid": 0, 00:15:04.531 "state": "enabled", 00:15:04.531 "thread": "nvmf_tgt_poll_group_000", 00:15:04.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:04.531 "listen_address": { 00:15:04.531 "trtype": "TCP", 00:15:04.531 "adrfam": "IPv4", 00:15:04.531 "traddr": "10.0.0.2", 00:15:04.531 "trsvcid": "4420" 00:15:04.531 }, 00:15:04.531 "peer_address": { 00:15:04.531 "trtype": "TCP", 00:15:04.531 "adrfam": "IPv4", 00:15:04.531 "traddr": "10.0.0.1", 00:15:04.531 "trsvcid": "34614" 00:15:04.531 }, 00:15:04.531 "auth": { 00:15:04.531 "state": "completed", 00:15:04.531 "digest": "sha256", 00:15:04.531 "dhgroup": "null" 00:15:04.531 } 00:15:04.531 } 00:15:04.531 ]' 00:15:04.531 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.531 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.531 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.790 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:04.790 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.790 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.790 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.790 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.049 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:05.049 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.618 14:24:54 
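Before each attach the host's negotiation space is re-pinned; in this part of the matrix that means sha256 as the only permitted hash and the null (no-DH) group, and the outer loops later widen this to the other digests and the FFDHE groups (the ffdhe2048 pass begins near the end of this excerpt):

    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null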
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.618 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.878 00:15:05.878 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.878 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.878 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.137 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.137 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.137 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.137 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.137 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.137 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.137 { 00:15:06.137 "cntlid": 5, 00:15:06.137 "qid": 0, 00:15:06.137 "state": "enabled", 00:15:06.137 "thread": "nvmf_tgt_poll_group_000", 00:15:06.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:06.137 "listen_address": { 00:15:06.137 "trtype": "TCP", 00:15:06.137 "adrfam": "IPv4", 00:15:06.137 "traddr": "10.0.0.2", 00:15:06.137 "trsvcid": "4420" 00:15:06.137 }, 00:15:06.137 "peer_address": { 00:15:06.137 "trtype": "TCP", 00:15:06.137 "adrfam": "IPv4", 00:15:06.137 "traddr": "10.0.0.1", 00:15:06.137 "trsvcid": "34636" 00:15:06.137 }, 00:15:06.137 "auth": { 00:15:06.137 "state": "completed", 00:15:06.137 "digest": "sha256", 00:15:06.137 "dhgroup": "null" 00:15:06.137 } 00:15:06.137 } 00:15:06.137 ]' 00:15:06.137 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.137 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.137 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.396 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:06.396 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.397 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.397 14:24:55 
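The evidence that DH-HMAC-CHAP actually ran is read back from the target: nvmf_subsystem_get_qpairs returns an auth object per qpair, and the test asserts state "completed" (rather than "authenticating" or "failed") plus the expected digest and dhgroup. Those assertions reduce to jq one-liners:

    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]

The cntlid in these dumps climbs 1, 3, 5, ... because each slot connects twice (SPDK host, then nvme-cli) and every fresh connection is allocated a new controller ID.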
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.397 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.397 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:06.397 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:06.964 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.223 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.482 00:15:07.482 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.482 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.482 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.741 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.741 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.741 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.741 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.741 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.741 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.741 { 00:15:07.741 "cntlid": 7, 00:15:07.741 "qid": 0, 00:15:07.741 "state": "enabled", 00:15:07.741 "thread": "nvmf_tgt_poll_group_000", 00:15:07.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:07.741 "listen_address": { 00:15:07.741 "trtype": "TCP", 00:15:07.741 "adrfam": "IPv4", 00:15:07.741 "traddr": "10.0.0.2", 00:15:07.741 "trsvcid": "4420" 00:15:07.741 }, 00:15:07.741 "peer_address": { 00:15:07.741 "trtype": "TCP", 00:15:07.741 "adrfam": "IPv4", 00:15:07.741 "traddr": "10.0.0.1", 00:15:07.741 "trsvcid": "34664" 00:15:07.741 }, 00:15:07.741 "auth": { 00:15:07.741 "state": "completed", 00:15:07.741 "digest": "sha256", 00:15:07.741 "dhgroup": "null" 00:15:07.741 } 00:15:07.741 } 00:15:07.741 ]' 00:15:07.741 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.741 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.741 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.741 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:07.741 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.001 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.001 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.001 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.001 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:08.001 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:08.569 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.569 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:08.569 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.569 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.569 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.569 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.569 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.569 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:08.569 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.829 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.087 00:15:09.087 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.087 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.087 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.346 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.346 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.346 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.346 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.346 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.346 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.346 { 00:15:09.346 "cntlid": 9, 00:15:09.346 "qid": 0, 00:15:09.346 "state": "enabled", 00:15:09.346 "thread": "nvmf_tgt_poll_group_000", 00:15:09.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:09.346 "listen_address": { 00:15:09.346 "trtype": "TCP", 00:15:09.346 "adrfam": "IPv4", 00:15:09.346 "traddr": "10.0.0.2", 00:15:09.346 "trsvcid": "4420" 00:15:09.346 }, 00:15:09.346 "peer_address": { 00:15:09.346 "trtype": "TCP", 00:15:09.346 "adrfam": "IPv4", 00:15:09.346 "traddr": "10.0.0.1", 00:15:09.346 "trsvcid": "34682" 00:15:09.346 }, 00:15:09.346 "auth": { 00:15:09.346 "state": "completed", 00:15:09.346 "digest": "sha256", 00:15:09.346 "dhgroup": "ffdhe2048" 00:15:09.346 } 00:15:09.346 } 00:15:09.346 ]' 00:15:09.346 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.346 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.347 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.347 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:09.347 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.606 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.606 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.606 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.606 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:09.606 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:10.174 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.174 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.174 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.174 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.174 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.174 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.174 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.174 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.434 14:24:59 
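Beyond the SPDK bdev path, every pass also exercises the kernel initiator: auth.sh@36 hands nvme-cli the DH-HMAC-CHAP secrets inline rather than as keyring names. A minimal sketch of that leg, using the key0 secrets printed in this run (test-only material copied from the trace; per nvme-cli, -i 1 requests a single I/O queue and -l 0 sets ctrl-loss-tmo to zero):

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==:' \
    --dhchap-ctrl-secret 'DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # trace shows: disconnected 1 controller(s)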
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.434 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.693 00:15:10.693 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.693 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.693 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.953 { 00:15:10.953 "cntlid": 11, 00:15:10.953 "qid": 0, 00:15:10.953 "state": "enabled", 00:15:10.953 "thread": "nvmf_tgt_poll_group_000", 00:15:10.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:10.953 "listen_address": { 00:15:10.953 "trtype": "TCP", 00:15:10.953 "adrfam": "IPv4", 00:15:10.953 "traddr": "10.0.0.2", 00:15:10.953 "trsvcid": "4420" 00:15:10.953 }, 00:15:10.953 "peer_address": { 00:15:10.953 "trtype": "TCP", 00:15:10.953 "adrfam": "IPv4", 00:15:10.953 "traddr": "10.0.0.1", 00:15:10.953 "trsvcid": "34706" 00:15:10.953 }, 00:15:10.953 "auth": { 00:15:10.953 "state": "completed", 00:15:10.953 "digest": "sha256", 00:15:10.953 "dhgroup": "ffdhe2048" 00:15:10.953 } 00:15:10.953 } 00:15:10.953 ]' 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.953 14:25:00 
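The block just traced is the per-round verification (auth.sh@73-77): list the host-side controller, then pull the subsystem's qpairs from the target and assert on the negotiated auth fields. Condensed with the same paths and jq filters as the trace; rpc.py with no -s talks to the target's default RPC socket, while the host stack listens on /var/tmp/host.sock:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# host side: the attached bdev controller must be the one this round created
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# target side: the qpair must have completed DH-HMAC-CHAP with this round's parameters
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]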
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.953 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.212 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:11.212 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:11.780 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.781 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:11.781 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.781 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.781 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.781 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.781 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:11.781 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:12.040 14:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.040 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.299 00:15:12.299 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.299 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.299 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.558 { 00:15:12.558 "cntlid": 13, 00:15:12.558 "qid": 0, 00:15:12.558 "state": "enabled", 00:15:12.558 "thread": "nvmf_tgt_poll_group_000", 00:15:12.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:12.558 "listen_address": { 00:15:12.558 "trtype": "TCP", 00:15:12.558 "adrfam": "IPv4", 00:15:12.558 "traddr": "10.0.0.2", 00:15:12.558 "trsvcid": "4420" 00:15:12.558 }, 00:15:12.558 "peer_address": { 00:15:12.558 "trtype": "TCP", 00:15:12.558 "adrfam": "IPv4", 00:15:12.558 "traddr": "10.0.0.1", 00:15:12.558 "trsvcid": "34730" 00:15:12.558 }, 00:15:12.558 "auth": { 00:15:12.558 "state": "completed", 00:15:12.558 "digest": 
"sha256", 00:15:12.558 "dhgroup": "ffdhe2048" 00:15:12.558 } 00:15:12.558 } 00:15:12.558 ]' 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.558 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.817 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:12.817 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:13.385 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.385 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.385 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.385 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.385 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.385 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.385 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:13.385 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.644 14:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.644 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.903 00:15:13.903 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.903 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.903 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.162 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.162 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.162 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.162 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.162 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.162 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.162 { 00:15:14.162 "cntlid": 15, 00:15:14.162 "qid": 0, 00:15:14.162 "state": "enabled", 00:15:14.163 "thread": "nvmf_tgt_poll_group_000", 00:15:14.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:14.163 "listen_address": { 00:15:14.163 "trtype": "TCP", 00:15:14.163 "adrfam": "IPv4", 00:15:14.163 "traddr": "10.0.0.2", 00:15:14.163 "trsvcid": "4420" 00:15:14.163 }, 00:15:14.163 "peer_address": { 00:15:14.163 "trtype": "TCP", 00:15:14.163 "adrfam": "IPv4", 00:15:14.163 "traddr": "10.0.0.1", 00:15:14.163 
"trsvcid": "34748" 00:15:14.163 }, 00:15:14.163 "auth": { 00:15:14.163 "state": "completed", 00:15:14.163 "digest": "sha256", 00:15:14.163 "dhgroup": "ffdhe2048" 00:15:14.163 } 00:15:14.163 } 00:15:14.163 ]' 00:15:14.163 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.163 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.163 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.163 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:14.163 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.163 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.163 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.163 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.421 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:14.422 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:14.990 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.990 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.990 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.990 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.990 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.990 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.990 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.990 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:14.990 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:15.250 14:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.250 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.509 00:15:15.509 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.509 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.509 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.769 { 00:15:15.769 "cntlid": 17, 00:15:15.769 "qid": 0, 00:15:15.769 "state": "enabled", 00:15:15.769 "thread": "nvmf_tgt_poll_group_000", 00:15:15.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:15.769 "listen_address": { 00:15:15.769 "trtype": "TCP", 00:15:15.769 "adrfam": "IPv4", 
00:15:15.769 "traddr": "10.0.0.2", 00:15:15.769 "trsvcid": "4420" 00:15:15.769 }, 00:15:15.769 "peer_address": { 00:15:15.769 "trtype": "TCP", 00:15:15.769 "adrfam": "IPv4", 00:15:15.769 "traddr": "10.0.0.1", 00:15:15.769 "trsvcid": "59824" 00:15:15.769 }, 00:15:15.769 "auth": { 00:15:15.769 "state": "completed", 00:15:15.769 "digest": "sha256", 00:15:15.769 "dhgroup": "ffdhe3072" 00:15:15.769 } 00:15:15.769 } 00:15:15.769 ]' 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.769 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.128 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:16.128 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.822 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.108 00:15:17.108 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.108 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.108 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.367 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.368 { 
00:15:17.368 "cntlid": 19, 00:15:17.368 "qid": 0, 00:15:17.368 "state": "enabled", 00:15:17.368 "thread": "nvmf_tgt_poll_group_000", 00:15:17.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:17.368 "listen_address": { 00:15:17.368 "trtype": "TCP", 00:15:17.368 "adrfam": "IPv4", 00:15:17.368 "traddr": "10.0.0.2", 00:15:17.368 "trsvcid": "4420" 00:15:17.368 }, 00:15:17.368 "peer_address": { 00:15:17.368 "trtype": "TCP", 00:15:17.368 "adrfam": "IPv4", 00:15:17.368 "traddr": "10.0.0.1", 00:15:17.368 "trsvcid": "59854" 00:15:17.368 }, 00:15:17.368 "auth": { 00:15:17.368 "state": "completed", 00:15:17.368 "digest": "sha256", 00:15:17.368 "dhgroup": "ffdhe3072" 00:15:17.368 } 00:15:17.368 } 00:15:17.368 ]' 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.368 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.627 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:17.627 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:18.195 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.195 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:18.195 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.195 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.195 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.195 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.195 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.195 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.454 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:18.454 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.454 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.454 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:18.454 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:18.454 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.455 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.455 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.455 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.455 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.455 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.455 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.455 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.713 00:15:18.713 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.713 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.713 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.972 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.972 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.972 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.972 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.972 14:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.973 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.973 { 00:15:18.973 "cntlid": 21, 00:15:18.973 "qid": 0, 00:15:18.973 "state": "enabled", 00:15:18.973 "thread": "nvmf_tgt_poll_group_000", 00:15:18.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:18.973 "listen_address": { 00:15:18.973 "trtype": "TCP", 00:15:18.973 "adrfam": "IPv4", 00:15:18.973 "traddr": "10.0.0.2", 00:15:18.973 "trsvcid": "4420" 00:15:18.973 }, 00:15:18.973 "peer_address": { 00:15:18.973 "trtype": "TCP", 00:15:18.973 "adrfam": "IPv4", 00:15:18.973 "traddr": "10.0.0.1", 00:15:18.973 "trsvcid": "59876" 00:15:18.973 }, 00:15:18.973 "auth": { 00:15:18.973 "state": "completed", 00:15:18.973 "digest": "sha256", 00:15:18.973 "dhgroup": "ffdhe3072" 00:15:18.973 } 00:15:18.973 } 00:15:18.973 ]' 00:15:18.973 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.973 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.973 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.973 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.973 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.973 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.973 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.973 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.232 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:19.232 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:19.800 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.800 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:19.800 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.800 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.800 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:19.800 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.800 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.800 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.059 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.318 00:15:20.318 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.318 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.318 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.576 14:25:09 
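The @119/@120 markers that keep reappearing are the sweep driving all of this: each dhgroup is tried against every key slot, and slots without a controller key authenticate one-way only (note the add_host above for key3 carries no --dhchap-ctrlr-key). Shape of the loop, paraphrased around the helpers sketched earlier; the dhgroup list covers only what is visible in this stretch of the log (null, ffdhe2048, ffdhe3072, and the start of ffdhe4096):

# connect_authenticate = the add_host/attach/verify/detach sequence sketched above
for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do        # auth.sh@119
    for keyid in "${!keys[@]}"; do                           # auth.sh@120
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
            --dhchap-dhgroups "$dhgroup"                     # auth.sh@121
        connect_authenticate sha256 "$dhgroup" "$keyid"      # auth.sh@123
    done
done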
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.576 { 00:15:20.576 "cntlid": 23, 00:15:20.576 "qid": 0, 00:15:20.576 "state": "enabled", 00:15:20.576 "thread": "nvmf_tgt_poll_group_000", 00:15:20.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:20.576 "listen_address": { 00:15:20.576 "trtype": "TCP", 00:15:20.576 "adrfam": "IPv4", 00:15:20.576 "traddr": "10.0.0.2", 00:15:20.576 "trsvcid": "4420" 00:15:20.576 }, 00:15:20.576 "peer_address": { 00:15:20.576 "trtype": "TCP", 00:15:20.576 "adrfam": "IPv4", 00:15:20.576 "traddr": "10.0.0.1", 00:15:20.576 "trsvcid": "59896" 00:15:20.576 }, 00:15:20.576 "auth": { 00:15:20.576 "state": "completed", 00:15:20.576 "digest": "sha256", 00:15:20.576 "dhgroup": "ffdhe3072" 00:15:20.576 } 00:15:20.576 } 00:15:20.576 ]' 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.576 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.835 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:20.835 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:21.404 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.404 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.404 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.404 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.404 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
00:15:21.404 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:21.404 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:21.404 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:21.404 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.663 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.922
00:15:21.923 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:21.923 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:21.923 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:22.182 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
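Each round drives three RPCs before that name check. bdev_nvme_set_options, issued against the host-side RPC socket at /var/tmp/host.sock, restricts which digests and DH groups the initiator may offer; nvmf_subsystem_add_host, issued against the target, binds the host NQN to a DH-CHAP key (plus a controller key when bidirectional authentication is wanted); bdev_nvme_attach_controller then performs the authenticated connect. Condensed from the trace into a sketch (key0/ckey0 are key names registered earlier in the test, outside this excerpt; $hostnqn stands for the long uuid NQN):

  # Host side: restrict what the initiator is allowed to negotiate.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # Target side: require DH-CHAP from this host; ckey0 enables
  # bidirectional authentication.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Authenticated attach from the SPDK host stack; success is confirmed
  # right after via bdev_nvme_get_controllers.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0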
00:15:22.182 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:22.182 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.182 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.182 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.182 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:22.182 {
00:15:22.182 "cntlid": 25,
00:15:22.182 "qid": 0,
00:15:22.182 "state": "enabled",
00:15:22.182 "thread": "nvmf_tgt_poll_group_000",
00:15:22.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:22.182 "listen_address": {
00:15:22.182 "trtype": "TCP",
00:15:22.182 "adrfam": "IPv4",
00:15:22.182 "traddr": "10.0.0.2",
00:15:22.182 "trsvcid": "4420"
00:15:22.182 },
00:15:22.182 "peer_address": {
00:15:22.182 "trtype": "TCP",
00:15:22.182 "adrfam": "IPv4",
00:15:22.182 "traddr": "10.0.0.1",
00:15:22.182 "trsvcid": "59934"
00:15:22.182 },
00:15:22.182 "auth": {
00:15:22.182 "state": "completed",
00:15:22.182 "digest": "sha256",
00:15:22.182 "dhgroup": "ffdhe4096"
00:15:22.182 }
00:15:22.182 }
00:15:22.182 ]'
14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=:
14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=:
00:15:23.009 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:23.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
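The same key pair is then exercised through the kernel initiator: nvme connect is handed the secrets directly, in the NVMe-spec DHHC-1 text form that both nvme-cli and SPDK accept. The two-digit field after DHHC-1: names the hash tied to the secret (00 = no transformation, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload is the raw key followed by a 4-byte CRC-32 of that key. A quick sanity check of one of the secrets from this trace (a sketch; any POSIX shell with base64 will do):

  # Strip the DHHC-1 wrapper and measure the decoded payload.
  s='DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ:'
  b64=${s#DHHC-1:??:}; b64=${b64%:}
  printf '%s' "$b64" | base64 -d | wc -c   # 36 = 32-byte key + 4-byte CRC-32

The --dhchap-ctrl-secret flag, present in the rounds that carry a ckey, is what requests bidirectional authentication from the kernel side: the controller must then prove knowledge of that second key back to the host.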
00:15:23.009 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:23.009 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.009 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.009 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.009 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:23.009 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:23.009 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:23.268 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:23.527
00:15:23.527 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:23.527 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:23.527 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:23.786 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
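One subtlety of connect_authenticate shows up in the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line traced before each nvmf_subsystem_add_host: bash's ${var:+word} expands to word only when var is set and non-empty, so ckey ends up either an empty array or the complete flag/value pair. That is why the key3 rounds add the host with --dhchap-key key3 alone, while rounds like this one also carry --dhchap-ctrlr-key. A self-contained illustration (placeholder values, not the test's real keys):

  # ${var:+word} keeps an optional flag out of the command line entirely.
  ckeys=([0]="some-secret" [3]="")   # key3 deliberately has no controller key
  key=0; ckey=(${ckeys[$key]:+--dhchap-ctrlr-key "ckey$key"})
  echo "${ckey[@]}"    # prints: --dhchap-ctrlr-key ckey0
  key=3; ckey=(${ckeys[$key]:+--dhchap-ctrlr-key "ckey$key"})
  echo "${#ckey[@]}"   # prints: 0, so the flag is omitted for key3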
00:15:23.786 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:23.786 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.786 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.786 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.786 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:23.786 {
00:15:23.786 "cntlid": 27,
00:15:23.786 "qid": 0,
00:15:23.786 "state": "enabled",
00:15:23.786 "thread": "nvmf_tgt_poll_group_000",
00:15:23.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:23.786 "listen_address": {
00:15:23.786 "trtype": "TCP",
00:15:23.786 "adrfam": "IPv4",
00:15:23.786 "traddr": "10.0.0.2",
00:15:23.786 "trsvcid": "4420"
00:15:23.786 },
00:15:23.786 "peer_address": {
00:15:23.786 "trtype": "TCP",
00:15:23.786 "adrfam": "IPv4",
00:15:23.786 "traddr": "10.0.0.1",
00:15:23.786 "trsvcid": "59966"
00:15:23.786 },
00:15:23.786 "auth": {
00:15:23.786 "state": "completed",
00:15:23.786 "digest": "sha256",
00:15:23.786 "dhgroup": "ffdhe4096"
00:15:23.786 }
00:15:23.786 }
00:15:23.786 ]'
14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:24.044 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==:
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==:
00:15:24.612 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n
nqn.2024-03.io.spdk:cnode0 00:15:24.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.612 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:24.612 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.612 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.612 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.612 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.612 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:24.612 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.871 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.130 00:15:25.130 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:15:25.130 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.130 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.389 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.389 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.389 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.389 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.389 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.389 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.389 { 00:15:25.389 "cntlid": 29, 00:15:25.389 "qid": 0, 00:15:25.389 "state": "enabled", 00:15:25.389 "thread": "nvmf_tgt_poll_group_000", 00:15:25.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:25.389 "listen_address": { 00:15:25.389 "trtype": "TCP", 00:15:25.389 "adrfam": "IPv4", 00:15:25.389 "traddr": "10.0.0.2", 00:15:25.389 "trsvcid": "4420" 00:15:25.389 }, 00:15:25.389 "peer_address": { 00:15:25.389 "trtype": "TCP", 00:15:25.389 "adrfam": "IPv4", 00:15:25.389 "traddr": "10.0.0.1", 00:15:25.389 "trsvcid": "39396" 00:15:25.389 }, 00:15:25.389 "auth": { 00:15:25.389 "state": "completed", 00:15:25.389 "digest": "sha256", 00:15:25.389 "dhgroup": "ffdhe4096" 00:15:25.389 } 00:15:25.389 } 00:15:25.389 ]' 00:15:25.389 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.389 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.389 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.389 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:25.389 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.648 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.648 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.648 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.648 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:25.648 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: 
--dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:26.216 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.216 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.216 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.216 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.216 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.216 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.475 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.734 00:15:26.734 14:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.734 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.734 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.993 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.993 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.993 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.993 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.993 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.993 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.993 { 00:15:26.993 "cntlid": 31, 00:15:26.993 "qid": 0, 00:15:26.993 "state": "enabled", 00:15:26.993 "thread": "nvmf_tgt_poll_group_000", 00:15:26.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:26.993 "listen_address": { 00:15:26.993 "trtype": "TCP", 00:15:26.993 "adrfam": "IPv4", 00:15:26.993 "traddr": "10.0.0.2", 00:15:26.993 "trsvcid": "4420" 00:15:26.993 }, 00:15:26.993 "peer_address": { 00:15:26.993 "trtype": "TCP", 00:15:26.993 "adrfam": "IPv4", 00:15:26.993 "traddr": "10.0.0.1", 00:15:26.993 "trsvcid": "39422" 00:15:26.993 }, 00:15:26.993 "auth": { 00:15:26.993 "state": "completed", 00:15:26.993 "digest": "sha256", 00:15:26.993 "dhgroup": "ffdhe4096" 00:15:26.993 } 00:15:26.993 } 00:15:26.993 ]' 00:15:26.993 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.993 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.993 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.252 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.252 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.252 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.252 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.252 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.511 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:27.511 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.080 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.648 00:15:28.648 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.648 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.648 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.648 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.648 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.648 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.648 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.648 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.648 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.648 { 00:15:28.648 "cntlid": 33, 00:15:28.648 "qid": 0, 00:15:28.648 "state": "enabled", 00:15:28.648 "thread": "nvmf_tgt_poll_group_000", 00:15:28.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:28.648 "listen_address": { 00:15:28.648 "trtype": "TCP", 00:15:28.648 "adrfam": "IPv4", 00:15:28.648 "traddr": "10.0.0.2", 00:15:28.648 "trsvcid": "4420" 00:15:28.648 }, 00:15:28.648 "peer_address": { 00:15:28.648 "trtype": "TCP", 00:15:28.648 "adrfam": "IPv4", 00:15:28.648 "traddr": "10.0.0.1", 00:15:28.648 "trsvcid": "39436" 00:15:28.648 }, 00:15:28.648 "auth": { 00:15:28.648 "state": "completed", 00:15:28.648 "digest": "sha256", 00:15:28.648 "dhgroup": "ffdhe6144" 00:15:28.648 } 00:15:28.648 } 00:15:28.648 ]' 00:15:28.648 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.907 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.907 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.907 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:28.907 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.907 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.907 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.907 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.166 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret 
DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:29.166 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:29.734 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.734 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.734 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.734 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.734 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.734 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.734 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:29.734 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.993 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.251 00:15:30.251 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.251 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.251 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.510 { 00:15:30.510 "cntlid": 35, 00:15:30.510 "qid": 0, 00:15:30.510 "state": "enabled", 00:15:30.510 "thread": "nvmf_tgt_poll_group_000", 00:15:30.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:30.510 "listen_address": { 00:15:30.510 "trtype": "TCP", 00:15:30.510 "adrfam": "IPv4", 00:15:30.510 "traddr": "10.0.0.2", 00:15:30.510 "trsvcid": "4420" 00:15:30.510 }, 00:15:30.510 "peer_address": { 00:15:30.510 "trtype": "TCP", 00:15:30.510 "adrfam": "IPv4", 00:15:30.510 "traddr": "10.0.0.1", 00:15:30.510 "trsvcid": "39460" 00:15:30.510 }, 00:15:30.510 "auth": { 00:15:30.510 "state": "completed", 00:15:30.510 "digest": "sha256", 00:15:30.510 "dhgroup": "ffdhe6144" 00:15:30.510 } 00:15:30.510 } 00:15:30.510 ]' 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.510 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.769 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:30.769 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:31.338 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.338 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:31.338 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.338 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.338 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.338 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.338 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.338 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.597 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.856 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.116 { 00:15:32.116 "cntlid": 37, 00:15:32.116 "qid": 0, 00:15:32.116 "state": "enabled", 00:15:32.116 "thread": "nvmf_tgt_poll_group_000", 00:15:32.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.116 "listen_address": { 00:15:32.116 "trtype": "TCP", 00:15:32.116 "adrfam": "IPv4", 00:15:32.116 "traddr": "10.0.0.2", 00:15:32.116 "trsvcid": "4420" 00:15:32.116 }, 00:15:32.116 "peer_address": { 00:15:32.116 "trtype": "TCP", 00:15:32.116 "adrfam": "IPv4", 00:15:32.116 "traddr": "10.0.0.1", 00:15:32.116 "trsvcid": "39486" 00:15:32.116 }, 00:15:32.116 "auth": { 00:15:32.116 "state": "completed", 00:15:32.116 "digest": "sha256", 00:15:32.116 "dhgroup": "ffdhe6144" 00:15:32.116 } 00:15:32.116 } 00:15:32.116 ]' 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.116 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.375 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:32.375 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.375 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.375 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:32.375 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.634 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:32.634 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.203 14:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.203 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.772 00:15:33.772 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.772 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.772 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.772 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.772 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.772 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.772 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.772 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.772 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.772 { 00:15:33.772 "cntlid": 39, 00:15:33.772 "qid": 0, 00:15:33.772 "state": "enabled", 00:15:33.772 "thread": "nvmf_tgt_poll_group_000", 00:15:33.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:33.772 "listen_address": { 00:15:33.772 "trtype": "TCP", 00:15:33.772 "adrfam": "IPv4", 00:15:33.772 "traddr": "10.0.0.2", 00:15:33.772 "trsvcid": "4420" 00:15:33.772 }, 00:15:33.772 "peer_address": { 00:15:33.772 "trtype": "TCP", 00:15:33.772 "adrfam": "IPv4", 00:15:33.772 "traddr": "10.0.0.1", 00:15:33.772 "trsvcid": "39518" 00:15:33.772 }, 00:15:33.772 "auth": { 00:15:33.772 "state": "completed", 00:15:33.772 "digest": "sha256", 00:15:33.772 "dhgroup": "ffdhe6144" 00:15:33.772 } 00:15:33.772 } 00:15:33.772 ]' 00:15:33.772 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.031 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.031 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.031 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.031 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.031 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:34.031 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.031 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.290 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:34.291 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:34.859 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.859 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.859 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.859 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.859 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.859 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.859 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.859 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:34.859 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.118 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.376 00:15:35.636 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.636 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.636 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.636 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.636 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.636 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.636 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.636 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.636 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.636 { 00:15:35.636 "cntlid": 41, 00:15:35.636 "qid": 0, 00:15:35.636 "state": "enabled", 00:15:35.636 "thread": "nvmf_tgt_poll_group_000", 00:15:35.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:35.636 "listen_address": { 00:15:35.636 "trtype": "TCP", 00:15:35.636 "adrfam": "IPv4", 00:15:35.636 "traddr": "10.0.0.2", 00:15:35.636 "trsvcid": "4420" 00:15:35.636 }, 00:15:35.636 "peer_address": { 00:15:35.636 "trtype": "TCP", 00:15:35.636 "adrfam": "IPv4", 00:15:35.636 "traddr": "10.0.0.1", 00:15:35.636 "trsvcid": "59474" 00:15:35.636 }, 00:15:35.636 "auth": { 00:15:35.636 "state": "completed", 00:15:35.636 "digest": "sha256", 00:15:35.636 "dhgroup": "ffdhe8192" 00:15:35.636 } 00:15:35.636 } 00:15:35.636 ]' 00:15:35.636 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.895 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.895 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.895 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:35.895 14:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.895 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.895 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.895 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.154 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:36.154 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:36.721 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.721 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.721 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.721 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.721 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.721 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.721 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:36.721 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:36.979 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.980 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.547 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.547 { 00:15:37.547 "cntlid": 43, 00:15:37.547 "qid": 0, 00:15:37.547 "state": "enabled", 00:15:37.547 "thread": "nvmf_tgt_poll_group_000", 00:15:37.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:37.547 "listen_address": { 00:15:37.547 "trtype": "TCP", 00:15:37.547 "adrfam": "IPv4", 00:15:37.547 "traddr": "10.0.0.2", 00:15:37.547 "trsvcid": "4420" 00:15:37.547 }, 00:15:37.547 "peer_address": { 00:15:37.547 "trtype": "TCP", 00:15:37.547 "adrfam": "IPv4", 00:15:37.547 "traddr": "10.0.0.1", 00:15:37.547 "trsvcid": "59516" 00:15:37.547 }, 00:15:37.547 "auth": { 00:15:37.547 "state": "completed", 00:15:37.547 "digest": "sha256", 00:15:37.547 "dhgroup": "ffdhe8192" 00:15:37.547 } 00:15:37.547 } 00:15:37.547 ]' 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:37.547 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.806 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:37.806 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.806 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.806 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.806 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.806 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:37.806 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:38.372 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:38.631 14:25:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.631 14:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.198 00:15:39.198 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.198 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.198 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.457 { 00:15:39.457 "cntlid": 45, 00:15:39.457 "qid": 0, 00:15:39.457 "state": "enabled", 00:15:39.457 "thread": "nvmf_tgt_poll_group_000", 00:15:39.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.457 "listen_address": { 00:15:39.457 "trtype": "TCP", 00:15:39.457 "adrfam": "IPv4", 00:15:39.457 "traddr": "10.0.0.2", 00:15:39.457 "trsvcid": "4420" 00:15:39.457 }, 00:15:39.457 "peer_address": { 00:15:39.457 "trtype": "TCP", 00:15:39.457 "adrfam": "IPv4", 00:15:39.457 "traddr": "10.0.0.1", 00:15:39.457 "trsvcid": "59542" 00:15:39.457 }, 00:15:39.457 "auth": { 00:15:39.457 "state": "completed", 00:15:39.457 "digest": "sha256", 00:15:39.457 "dhgroup": "ffdhe8192" 00:15:39.457 } 00:15:39.457 } 00:15:39.457 ]' 00:15:39.457 
14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.457 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.716 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:39.716 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:40.284 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.284 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.284 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.284 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.284 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.284 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.284 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:40.284 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.542 14:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.542 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.109 00:15:41.109 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.109 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.109 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.367 { 00:15:41.367 "cntlid": 47, 00:15:41.367 "qid": 0, 00:15:41.367 "state": "enabled", 00:15:41.367 "thread": "nvmf_tgt_poll_group_000", 00:15:41.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:41.367 "listen_address": { 00:15:41.367 "trtype": "TCP", 00:15:41.367 "adrfam": "IPv4", 00:15:41.367 "traddr": "10.0.0.2", 00:15:41.367 "trsvcid": "4420" 00:15:41.367 }, 00:15:41.367 "peer_address": { 00:15:41.367 "trtype": "TCP", 00:15:41.367 "adrfam": "IPv4", 00:15:41.367 "traddr": "10.0.0.1", 00:15:41.367 "trsvcid": "59576" 00:15:41.367 }, 00:15:41.367 "auth": { 00:15:41.367 "state": "completed", 00:15:41.367 
"digest": "sha256", 00:15:41.367 "dhgroup": "ffdhe8192" 00:15:41.367 } 00:15:41.367 } 00:15:41.367 ]' 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.367 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.626 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:41.626 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:42.194 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.194 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:42.194 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.194 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.194 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.194 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:42.194 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.194 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.194 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:42.194 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:42.453 14:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.453 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.712 00:15:42.712 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.712 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.712 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.970 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.970 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.970 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.970 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.970 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.970 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.970 { 00:15:42.970 "cntlid": 49, 00:15:42.970 "qid": 0, 00:15:42.970 "state": "enabled", 00:15:42.970 "thread": "nvmf_tgt_poll_group_000", 00:15:42.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.970 "listen_address": { 00:15:42.970 "trtype": "TCP", 00:15:42.970 "adrfam": "IPv4", 
00:15:42.970 "traddr": "10.0.0.2", 00:15:42.970 "trsvcid": "4420" 00:15:42.970 }, 00:15:42.970 "peer_address": { 00:15:42.970 "trtype": "TCP", 00:15:42.970 "adrfam": "IPv4", 00:15:42.970 "traddr": "10.0.0.1", 00:15:42.970 "trsvcid": "59612" 00:15:42.970 }, 00:15:42.970 "auth": { 00:15:42.970 "state": "completed", 00:15:42.970 "digest": "sha384", 00:15:42.970 "dhgroup": "null" 00:15:42.970 } 00:15:42.970 } 00:15:42.970 ]' 00:15:42.970 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.970 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.970 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.970 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:42.970 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.970 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.970 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.970 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.229 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:43.229 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:43.796 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.796 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.796 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.796 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.796 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.796 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.796 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:43.796 14:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.055 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.314 00:15:44.314 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.314 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.314 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.314 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.314 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.314 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.314 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.314 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.314 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.314 { 00:15:44.314 "cntlid": 51, 00:15:44.314 "qid": 0, 00:15:44.314 "state": "enabled", 
00:15:44.314 "thread": "nvmf_tgt_poll_group_000", 00:15:44.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.314 "listen_address": { 00:15:44.314 "trtype": "TCP", 00:15:44.314 "adrfam": "IPv4", 00:15:44.314 "traddr": "10.0.0.2", 00:15:44.314 "trsvcid": "4420" 00:15:44.314 }, 00:15:44.314 "peer_address": { 00:15:44.314 "trtype": "TCP", 00:15:44.314 "adrfam": "IPv4", 00:15:44.314 "traddr": "10.0.0.1", 00:15:44.314 "trsvcid": "59652" 00:15:44.314 }, 00:15:44.314 "auth": { 00:15:44.314 "state": "completed", 00:15:44.314 "digest": "sha384", 00:15:44.314 "dhgroup": "null" 00:15:44.314 } 00:15:44.314 } 00:15:44.314 ]' 00:15:44.314 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.572 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.572 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.572 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:44.572 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.572 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.572 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.572 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.830 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:44.831 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:45.397 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.397 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.397 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.397 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.397 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.397 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.397 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:45.397 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.656 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.656 00:15:45.915 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.915 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.915 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.915 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.915 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.915 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.915 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.915 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.915 14:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.915 { 00:15:45.915 "cntlid": 53, 00:15:45.915 "qid": 0, 00:15:45.915 "state": "enabled", 00:15:45.915 "thread": "nvmf_tgt_poll_group_000", 00:15:45.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.915 "listen_address": { 00:15:45.915 "trtype": "TCP", 00:15:45.915 "adrfam": "IPv4", 00:15:45.915 "traddr": "10.0.0.2", 00:15:45.915 "trsvcid": "4420" 00:15:45.915 }, 00:15:45.915 "peer_address": { 00:15:45.915 "trtype": "TCP", 00:15:45.915 "adrfam": "IPv4", 00:15:45.915 "traddr": "10.0.0.1", 00:15:45.915 "trsvcid": "36394" 00:15:45.915 }, 00:15:45.915 "auth": { 00:15:45.915 "state": "completed", 00:15:45.915 "digest": "sha384", 00:15:45.915 "dhgroup": "null" 00:15:45.915 } 00:15:45.915 } 00:15:45.915 ]' 00:15:45.915 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.174 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.174 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.174 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:46.174 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.174 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.174 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.174 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.433 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:46.433 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:47.001 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.001 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.260 00:15:47.518 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.518 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.518 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.518 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.518 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.518 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.518 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.518 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.518 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.518 { 00:15:47.518 "cntlid": 55, 00:15:47.518 "qid": 0, 00:15:47.518 "state": "enabled", 00:15:47.518 "thread": "nvmf_tgt_poll_group_000", 00:15:47.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.518 "listen_address": { 00:15:47.518 "trtype": "TCP", 00:15:47.518 "adrfam": "IPv4", 00:15:47.518 "traddr": "10.0.0.2", 00:15:47.518 "trsvcid": "4420" 00:15:47.518 }, 00:15:47.518 "peer_address": { 00:15:47.519 "trtype": "TCP", 00:15:47.519 "adrfam": "IPv4", 00:15:47.519 "traddr": "10.0.0.1", 00:15:47.519 "trsvcid": "36414" 00:15:47.519 }, 00:15:47.519 "auth": { 00:15:47.519 "state": "completed", 00:15:47.519 "digest": "sha384", 00:15:47.519 "dhgroup": "null" 00:15:47.519 } 00:15:47.519 } 00:15:47.519 ]' 00:15:47.519 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.519 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.519 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.777 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.777 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.777 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.777 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.777 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.036 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:48.036 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.603 14:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.603 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.862 00:15:48.862 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.862 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.862 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.128 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.128 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.128 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:49.128 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.128 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.128 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.128 { 00:15:49.128 "cntlid": 57, 00:15:49.128 "qid": 0, 00:15:49.128 "state": "enabled", 00:15:49.128 "thread": "nvmf_tgt_poll_group_000", 00:15:49.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.128 "listen_address": { 00:15:49.128 "trtype": "TCP", 00:15:49.128 "adrfam": "IPv4", 00:15:49.128 "traddr": "10.0.0.2", 00:15:49.128 "trsvcid": "4420" 00:15:49.128 }, 00:15:49.128 "peer_address": { 00:15:49.128 "trtype": "TCP", 00:15:49.128 "adrfam": "IPv4", 00:15:49.128 "traddr": "10.0.0.1", 00:15:49.128 "trsvcid": "36452" 00:15:49.128 }, 00:15:49.128 "auth": { 00:15:49.128 "state": "completed", 00:15:49.128 "digest": "sha384", 00:15:49.128 "dhgroup": "ffdhe2048" 00:15:49.128 } 00:15:49.128 } 00:15:49.128 ]' 00:15:49.128 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.128 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.128 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.391 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.391 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.391 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.391 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.391 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.650 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:49.650 14:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.217 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.475 00:15:50.734 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.734 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.734 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.734 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.734 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.734 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.734 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.734 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.734 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.734 { 00:15:50.734 "cntlid": 59, 00:15:50.734 "qid": 0, 00:15:50.734 "state": "enabled", 00:15:50.734 "thread": "nvmf_tgt_poll_group_000", 00:15:50.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.734 "listen_address": { 00:15:50.734 "trtype": "TCP", 00:15:50.734 "adrfam": "IPv4", 00:15:50.734 "traddr": "10.0.0.2", 00:15:50.734 "trsvcid": "4420" 00:15:50.734 }, 00:15:50.734 "peer_address": { 00:15:50.734 "trtype": "TCP", 00:15:50.734 "adrfam": "IPv4", 00:15:50.734 "traddr": "10.0.0.1", 00:15:50.734 "trsvcid": "36470" 00:15:50.734 }, 00:15:50.734 "auth": { 00:15:50.734 "state": "completed", 00:15:50.734 "digest": "sha384", 00:15:50.734 "dhgroup": "ffdhe2048" 00:15:50.734 } 00:15:50.734 } 00:15:50.734 ]' 00:15:50.734 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.734 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.993 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.993 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.993 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.993 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.993 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.993 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.251 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:51.251 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:51.817 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.817 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.817 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.817 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.817 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.818 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.818 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:51.818 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.077 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.077 00:15:52.336 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.336 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.336 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.336 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.336 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.336 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.336 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.336 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.336 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.336 { 00:15:52.336 "cntlid": 61, 00:15:52.336 "qid": 0, 00:15:52.336 "state": "enabled", 00:15:52.336 "thread": "nvmf_tgt_poll_group_000", 00:15:52.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:52.336 "listen_address": { 00:15:52.336 "trtype": "TCP", 00:15:52.336 "adrfam": "IPv4", 00:15:52.336 "traddr": "10.0.0.2", 00:15:52.336 "trsvcid": "4420" 00:15:52.336 }, 00:15:52.336 "peer_address": { 00:15:52.336 "trtype": "TCP", 00:15:52.336 "adrfam": "IPv4", 00:15:52.336 "traddr": "10.0.0.1", 00:15:52.336 "trsvcid": "36492" 00:15:52.336 }, 00:15:52.336 "auth": { 00:15:52.336 "state": "completed", 00:15:52.336 "digest": "sha384", 00:15:52.336 "dhgroup": "ffdhe2048" 00:15:52.336 } 00:15:52.336 } 00:15:52.336 ]' 00:15:52.336 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.595 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.595 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.595 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:52.595 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.595 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.595 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.595 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.853 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:52.853 14:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:53.420 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.420 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.421 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.421 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.421 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.421 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.421 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.421 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.679 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.939 00:15:53.939 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.939 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.939 14:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.939 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.939 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.939 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.939 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.939 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.198 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.198 { 00:15:54.198 "cntlid": 63, 00:15:54.198 "qid": 0, 00:15:54.198 "state": "enabled", 00:15:54.198 "thread": "nvmf_tgt_poll_group_000", 00:15:54.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.198 "listen_address": { 00:15:54.198 "trtype": "TCP", 00:15:54.198 "adrfam": "IPv4", 00:15:54.198 "traddr": "10.0.0.2", 00:15:54.198 "trsvcid": "4420" 00:15:54.198 }, 00:15:54.198 "peer_address": { 00:15:54.198 "trtype": "TCP", 00:15:54.198 "adrfam": "IPv4", 00:15:54.198 "traddr": "10.0.0.1", 00:15:54.198 "trsvcid": "36498" 00:15:54.198 }, 00:15:54.198 "auth": { 00:15:54.198 "state": "completed", 00:15:54.198 "digest": "sha384", 00:15:54.198 "dhgroup": "ffdhe2048" 00:15:54.198 } 00:15:54.198 } 00:15:54.198 ]' 00:15:54.198 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.198 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.198 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.198 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.198 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.198 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.198 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.198 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.457 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:54.457 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:15:55.025 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:55.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.025 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.025 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.025 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.025 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.025 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.025 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.025 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.025 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.284 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.544 
00:15:55.544 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.544 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.544 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.544 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.544 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.544 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.544 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.544 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.544 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.544 { 00:15:55.544 "cntlid": 65, 00:15:55.544 "qid": 0, 00:15:55.544 "state": "enabled", 00:15:55.544 "thread": "nvmf_tgt_poll_group_000", 00:15:55.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.544 "listen_address": { 00:15:55.544 "trtype": "TCP", 00:15:55.544 "adrfam": "IPv4", 00:15:55.544 "traddr": "10.0.0.2", 00:15:55.544 "trsvcid": "4420" 00:15:55.544 }, 00:15:55.544 "peer_address": { 00:15:55.544 "trtype": "TCP", 00:15:55.544 "adrfam": "IPv4", 00:15:55.544 "traddr": "10.0.0.1", 00:15:55.544 "trsvcid": "34250" 00:15:55.544 }, 00:15:55.544 "auth": { 00:15:55.544 "state": "completed", 00:15:55.544 "digest": "sha384", 00:15:55.544 "dhgroup": "ffdhe3072" 00:15:55.544 } 00:15:55.544 } 00:15:55.544 ]' 00:15:55.544 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.803 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.803 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.803 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.803 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.803 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.803 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.803 14:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.061 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:56.061 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.629 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.887 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.887 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.887 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.887 00:15:57.146 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.146 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.146 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.146 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.146 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.146 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.146 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.146 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.146 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.146 { 00:15:57.146 "cntlid": 67, 00:15:57.146 "qid": 0, 00:15:57.146 "state": "enabled", 00:15:57.146 "thread": "nvmf_tgt_poll_group_000", 00:15:57.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.146 "listen_address": { 00:15:57.146 "trtype": "TCP", 00:15:57.146 "adrfam": "IPv4", 00:15:57.146 "traddr": "10.0.0.2", 00:15:57.146 "trsvcid": "4420" 00:15:57.146 }, 00:15:57.146 "peer_address": { 00:15:57.146 "trtype": "TCP", 00:15:57.146 "adrfam": "IPv4", 00:15:57.146 "traddr": "10.0.0.1", 00:15:57.146 "trsvcid": "34274" 00:15:57.146 }, 00:15:57.146 "auth": { 00:15:57.146 "state": "completed", 00:15:57.146 "digest": "sha384", 00:15:57.146 "dhgroup": "ffdhe3072" 00:15:57.146 } 00:15:57.146 } 00:15:57.146 ]' 00:15:57.146 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.405 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.405 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.405 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:57.405 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.405 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.405 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.405 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.664 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret 
DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:57.664 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:15:58.232 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.232 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.232 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.232 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.232 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.232 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.232 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.232 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.491 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.750 00:15:58.750 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.750 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.750 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.750 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.750 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.750 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.750 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.750 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.750 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.750 { 00:15:58.750 "cntlid": 69, 00:15:58.750 "qid": 0, 00:15:58.750 "state": "enabled", 00:15:58.750 "thread": "nvmf_tgt_poll_group_000", 00:15:58.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:58.750 "listen_address": { 00:15:58.750 "trtype": "TCP", 00:15:58.750 "adrfam": "IPv4", 00:15:58.750 "traddr": "10.0.0.2", 00:15:58.750 "trsvcid": "4420" 00:15:58.750 }, 00:15:58.750 "peer_address": { 00:15:58.750 "trtype": "TCP", 00:15:58.750 "adrfam": "IPv4", 00:15:58.750 "traddr": "10.0.0.1", 00:15:58.750 "trsvcid": "34306" 00:15:58.750 }, 00:15:58.750 "auth": { 00:15:58.750 "state": "completed", 00:15:58.750 "digest": "sha384", 00:15:58.750 "dhgroup": "ffdhe3072" 00:15:58.750 } 00:15:58.750 } 00:15:58.750 ]' 00:15:58.750 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.009 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.009 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.009 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:59.009 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.010 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.010 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.010 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:59.269 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:59.269 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:15:59.837 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.837 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.837 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.837 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.837 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.837 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.837 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:59.837 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
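One detail worth noting in the key3 iterations above: the expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) makes the controller key optional. ckeys[3] is empty in this run, so bash's ${var:+word} expansion produces nothing and both nvmf_subsystem_add_host and bdev_connect are invoked with --dhchap-key key3 only, exactly as traced. A standalone illustration of the expansion (array contents hypothetical):

    # ${var:+word} expands to word only when var is set and non-empty.
    ckeys=([0]="secret0" [1]="secret1" [2]="secret2" [3]="")   # hypothetical values
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"    # 0 -> no controller-key flag is passed for key3
    keyid=1
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"     # --dhchap-ctrlr-key ckey1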
00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.097 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.097 00:16:00.356 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.356 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.356 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.356 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.356 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.356 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.356 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.356 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.356 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.356 { 00:16:00.356 "cntlid": 71, 00:16:00.356 "qid": 0, 00:16:00.356 "state": "enabled", 00:16:00.356 "thread": "nvmf_tgt_poll_group_000", 00:16:00.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.356 "listen_address": { 00:16:00.356 "trtype": "TCP", 00:16:00.356 "adrfam": "IPv4", 00:16:00.356 "traddr": "10.0.0.2", 00:16:00.356 "trsvcid": "4420" 00:16:00.356 }, 00:16:00.356 "peer_address": { 00:16:00.356 "trtype": "TCP", 00:16:00.356 "adrfam": "IPv4", 00:16:00.356 "traddr": "10.0.0.1", 00:16:00.356 "trsvcid": "34322" 00:16:00.356 }, 00:16:00.356 "auth": { 00:16:00.356 "state": "completed", 00:16:00.356 "digest": "sha384", 00:16:00.356 "dhgroup": "ffdhe3072" 00:16:00.356 } 00:16:00.356 } 00:16:00.356 ]' 00:16:00.356 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.615 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.615 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.615 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.615 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.615 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.615 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.615 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.874 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:00.874 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:01.441 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.441 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.441 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.441 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.441 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.441 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.441 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.441 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.441 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
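At 14:25:50 the outer loop advances (target/auth.sh@119 fires again) from ffdhe3072 to ffdhe4096 while the key loop restarts at key0. Each inner iteration first narrows the host-side initiator to a single digest/dhgroup via bdev_nvme_set_options, so a successful attach proves that exact negotiation rather than some fallback. A sketch of the loop shape; only sha384 and the groups null through ffdhe4096 are actually visible in this section, so the digest being fixed and the group list ending at ffdhe4096 are assumptions:

    # Assumed shape of the loops driving this trace; the real script may
    # iterate further digests and dhgroups than are visible here.
    dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096")
    for dhgroup in "${dhgroups[@]}"; do       # target/auth.sh@119 in the trace
        for keyid in "${!keys[@]}"; do        # target/auth.sh@120, keys 0..3
            # Pin the host-side driver to one digest/dhgroup combination.
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                --dhchap-dhgroups "$dhgroup"  # target/auth.sh@121
            connect_authenticate sha384 "$dhgroup" "$keyid"  # target/auth.sh@123
        done
    done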
00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.700 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.960 00:16:01.960 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.960 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.960 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.960 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.960 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.219 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.219 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.219 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.219 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.219 { 00:16:02.219 "cntlid": 73, 00:16:02.219 "qid": 0, 00:16:02.219 "state": "enabled", 00:16:02.219 "thread": "nvmf_tgt_poll_group_000", 00:16:02.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.219 "listen_address": { 00:16:02.219 "trtype": "TCP", 00:16:02.219 "adrfam": "IPv4", 00:16:02.219 "traddr": "10.0.0.2", 00:16:02.219 "trsvcid": "4420" 00:16:02.219 }, 00:16:02.219 "peer_address": { 00:16:02.219 "trtype": "TCP", 00:16:02.219 "adrfam": "IPv4", 00:16:02.219 "traddr": "10.0.0.1", 00:16:02.219 "trsvcid": "34358" 00:16:02.219 }, 00:16:02.219 "auth": { 00:16:02.219 "state": "completed", 00:16:02.219 "digest": "sha384", 00:16:02.219 "dhgroup": "ffdhe4096" 00:16:02.219 } 00:16:02.219 } 00:16:02.219 ]' 00:16:02.219 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.219 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.219 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.219 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.219 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.219 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.219 
14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.219 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.478 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:02.478 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:03.077 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.078 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.078 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.078 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.078 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.078 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.078 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.078 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.337 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.597 00:16:03.597 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.597 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.597 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.597 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.597 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.597 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.597 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.597 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.597 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.597 { 00:16:03.597 "cntlid": 75, 00:16:03.597 "qid": 0, 00:16:03.597 "state": "enabled", 00:16:03.597 "thread": "nvmf_tgt_poll_group_000", 00:16:03.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.597 "listen_address": { 00:16:03.597 "trtype": "TCP", 00:16:03.597 "adrfam": "IPv4", 00:16:03.597 "traddr": "10.0.0.2", 00:16:03.597 "trsvcid": "4420" 00:16:03.597 }, 00:16:03.597 "peer_address": { 00:16:03.597 "trtype": "TCP", 00:16:03.597 "adrfam": "IPv4", 00:16:03.597 "traddr": "10.0.0.1", 00:16:03.597 "trsvcid": "34388" 00:16:03.597 }, 00:16:03.597 "auth": { 00:16:03.597 "state": "completed", 00:16:03.597 "digest": "sha384", 00:16:03.597 "dhgroup": "ffdhe4096" 00:16:03.597 } 00:16:03.597 } 00:16:03.597 ]' 00:16:03.597 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.856 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.856 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.856 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:03.856 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.856 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.856 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.856 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.115 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:04.115 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:04.683 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.683 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.683 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.683 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.683 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.683 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.683 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.683 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.942 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.943 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.202 00:16:05.202 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.202 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.202 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.464 { 00:16:05.464 "cntlid": 77, 00:16:05.464 "qid": 0, 00:16:05.464 "state": "enabled", 00:16:05.464 "thread": "nvmf_tgt_poll_group_000", 00:16:05.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.464 "listen_address": { 00:16:05.464 "trtype": "TCP", 00:16:05.464 "adrfam": "IPv4", 00:16:05.464 "traddr": "10.0.0.2", 00:16:05.464 "trsvcid": "4420" 00:16:05.464 }, 00:16:05.464 "peer_address": { 00:16:05.464 "trtype": "TCP", 00:16:05.464 "adrfam": "IPv4", 00:16:05.464 "traddr": "10.0.0.1", 00:16:05.464 "trsvcid": "49388" 00:16:05.464 }, 00:16:05.464 "auth": { 00:16:05.464 "state": "completed", 00:16:05.464 "digest": "sha384", 00:16:05.464 "dhgroup": "ffdhe4096" 00:16:05.464 } 00:16:05.464 } 00:16:05.464 ]' 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.464 14:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.464 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.723 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:05.723 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:06.292 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.292 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.292 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.292 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.292 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.292 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.292 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.292 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.552 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.812 00:16:06.812 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.812 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.812 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.812 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.812 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.812 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.812 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.071 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.071 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.071 { 00:16:07.071 "cntlid": 79, 00:16:07.071 "qid": 0, 00:16:07.071 "state": "enabled", 00:16:07.071 "thread": "nvmf_tgt_poll_group_000", 00:16:07.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.071 "listen_address": { 00:16:07.071 "trtype": "TCP", 00:16:07.071 "adrfam": "IPv4", 00:16:07.071 "traddr": "10.0.0.2", 00:16:07.071 "trsvcid": "4420" 00:16:07.071 }, 00:16:07.071 "peer_address": { 00:16:07.071 "trtype": "TCP", 00:16:07.071 "adrfam": "IPv4", 00:16:07.071 "traddr": "10.0.0.1", 00:16:07.071 "trsvcid": "49422" 00:16:07.071 }, 00:16:07.071 "auth": { 00:16:07.071 "state": "completed", 00:16:07.071 "digest": "sha384", 00:16:07.071 "dhgroup": "ffdhe4096" 00:16:07.071 } 00:16:07.071 } 00:16:07.071 ]' 00:16:07.071 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.071 14:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.071 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.071 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:07.071 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.071 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.071 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.071 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.330 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:07.330 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:07.900 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.900 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.900 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.900 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.900 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.900 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.900 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.900 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.900 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:08.160 14:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.160 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.419 00:16:08.419 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.419 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.419 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.679 { 00:16:08.679 "cntlid": 81, 00:16:08.679 "qid": 0, 00:16:08.679 "state": "enabled", 00:16:08.679 "thread": "nvmf_tgt_poll_group_000", 00:16:08.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:08.679 "listen_address": { 00:16:08.679 "trtype": "TCP", 00:16:08.679 "adrfam": "IPv4", 00:16:08.679 "traddr": "10.0.0.2", 00:16:08.679 "trsvcid": "4420" 00:16:08.679 }, 00:16:08.679 "peer_address": { 00:16:08.679 "trtype": "TCP", 00:16:08.679 "adrfam": "IPv4", 00:16:08.679 "traddr": "10.0.0.1", 00:16:08.679 "trsvcid": "49458" 00:16:08.679 }, 00:16:08.679 "auth": { 00:16:08.679 "state": "completed", 00:16:08.679 "digest": 
"sha384", 00:16:08.679 "dhgroup": "ffdhe6144" 00:16:08.679 } 00:16:08.679 } 00:16:08.679 ]' 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.679 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.938 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:08.938 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:09.508 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.508 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.508 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.508 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.508 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.508 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.508 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.508 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.767 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.027 00:16:10.027 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.027 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.027 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.285 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.285 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.285 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.285 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.285 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.285 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.285 { 00:16:10.285 "cntlid": 83, 00:16:10.285 "qid": 0, 00:16:10.285 "state": "enabled", 00:16:10.285 "thread": "nvmf_tgt_poll_group_000", 00:16:10.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.285 "listen_address": { 00:16:10.285 "trtype": "TCP", 00:16:10.285 "adrfam": "IPv4", 00:16:10.285 "traddr": "10.0.0.2", 00:16:10.285 
"trsvcid": "4420" 00:16:10.285 }, 00:16:10.285 "peer_address": { 00:16:10.285 "trtype": "TCP", 00:16:10.285 "adrfam": "IPv4", 00:16:10.285 "traddr": "10.0.0.1", 00:16:10.285 "trsvcid": "49486" 00:16:10.285 }, 00:16:10.285 "auth": { 00:16:10.286 "state": "completed", 00:16:10.286 "digest": "sha384", 00:16:10.286 "dhgroup": "ffdhe6144" 00:16:10.286 } 00:16:10.286 } 00:16:10.286 ]' 00:16:10.286 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.286 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.286 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.286 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.286 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.286 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.286 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.286 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.545 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:10.545 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:11.113 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.113 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.113 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.113 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.113 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.113 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.113 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.113 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.373 
14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.373 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.649 00:16:11.941 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.941 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.941 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.941 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.941 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.941 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.941 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.941 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.941 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.941 { 00:16:11.941 "cntlid": 85, 00:16:11.941 "qid": 0, 00:16:11.941 "state": "enabled", 00:16:11.941 "thread": "nvmf_tgt_poll_group_000", 00:16:11.941 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:11.941 "listen_address": { 00:16:11.941 "trtype": "TCP", 00:16:11.941 "adrfam": "IPv4", 00:16:11.941 "traddr": "10.0.0.2", 00:16:11.941 "trsvcid": "4420" 00:16:11.941 }, 00:16:11.941 "peer_address": { 00:16:11.941 "trtype": "TCP", 00:16:11.941 "adrfam": "IPv4", 00:16:11.941 "traddr": "10.0.0.1", 00:16:11.941 "trsvcid": "49500" 00:16:11.941 }, 00:16:11.941 "auth": { 00:16:11.941 "state": "completed", 00:16:11.941 "digest": "sha384", 00:16:11.941 "dhgroup": "ffdhe6144" 00:16:11.941 } 00:16:11.941 } 00:16:11.941 ]' 00:16:11.941 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.941 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.941 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.249 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.249 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.249 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.249 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.249 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.249 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:12.249 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:12.839 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.839 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.839 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.839 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.839 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.839 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.839 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:12.839 14:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.098 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.356 00:16:13.356 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.356 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.356 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.616 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.616 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.616 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.616 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.616 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.616 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.616 { 00:16:13.616 "cntlid": 87, 
00:16:13.616 "qid": 0, 00:16:13.616 "state": "enabled", 00:16:13.616 "thread": "nvmf_tgt_poll_group_000", 00:16:13.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:13.616 "listen_address": { 00:16:13.616 "trtype": "TCP", 00:16:13.616 "adrfam": "IPv4", 00:16:13.616 "traddr": "10.0.0.2", 00:16:13.616 "trsvcid": "4420" 00:16:13.616 }, 00:16:13.616 "peer_address": { 00:16:13.616 "trtype": "TCP", 00:16:13.616 "adrfam": "IPv4", 00:16:13.616 "traddr": "10.0.0.1", 00:16:13.616 "trsvcid": "49518" 00:16:13.616 }, 00:16:13.616 "auth": { 00:16:13.616 "state": "completed", 00:16:13.616 "digest": "sha384", 00:16:13.616 "dhgroup": "ffdhe6144" 00:16:13.616 } 00:16:13.616 } 00:16:13.616 ]' 00:16:13.616 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.616 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.616 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.876 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:13.876 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.876 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.876 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.876 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.135 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:14.135 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.703 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.272 00:16:15.272 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.272 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.272 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.531 { 00:16:15.531 "cntlid": 89, 00:16:15.531 "qid": 0, 00:16:15.531 "state": "enabled", 00:16:15.531 "thread": "nvmf_tgt_poll_group_000", 00:16:15.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.531 "listen_address": { 00:16:15.531 "trtype": "TCP", 00:16:15.531 "adrfam": "IPv4", 00:16:15.531 "traddr": "10.0.0.2", 00:16:15.531 "trsvcid": "4420" 00:16:15.531 }, 00:16:15.531 "peer_address": { 00:16:15.531 "trtype": "TCP", 00:16:15.531 "adrfam": "IPv4", 00:16:15.531 "traddr": "10.0.0.1", 00:16:15.531 "trsvcid": "51802" 00:16:15.531 }, 00:16:15.531 "auth": { 00:16:15.531 "state": "completed", 00:16:15.531 "digest": "sha384", 00:16:15.531 "dhgroup": "ffdhe8192" 00:16:15.531 } 00:16:15.531 } 00:16:15.531 ]' 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.531 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.790 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:15.790 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:16.359 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.359 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.359 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.359 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.359 14:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.359 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.359 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.359 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.618 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.187 00:16:17.187 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.187 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.187 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.187 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.187 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:17.187 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.187 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.447 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.447 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.447 { 00:16:17.447 "cntlid": 91, 00:16:17.447 "qid": 0, 00:16:17.447 "state": "enabled", 00:16:17.447 "thread": "nvmf_tgt_poll_group_000", 00:16:17.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.447 "listen_address": { 00:16:17.447 "trtype": "TCP", 00:16:17.447 "adrfam": "IPv4", 00:16:17.447 "traddr": "10.0.0.2", 00:16:17.447 "trsvcid": "4420" 00:16:17.447 }, 00:16:17.447 "peer_address": { 00:16:17.447 "trtype": "TCP", 00:16:17.447 "adrfam": "IPv4", 00:16:17.447 "traddr": "10.0.0.1", 00:16:17.447 "trsvcid": "51844" 00:16:17.447 }, 00:16:17.447 "auth": { 00:16:17.447 "state": "completed", 00:16:17.447 "digest": "sha384", 00:16:17.447 "dhgroup": "ffdhe8192" 00:16:17.447 } 00:16:17.447 } 00:16:17.447 ]' 00:16:17.447 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.447 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.447 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.447 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.447 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.447 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.447 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.447 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.706 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:17.707 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:18.274 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.274 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.274 14:26:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.274 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.274 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.274 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.274 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.275 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.534 14:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.793 00:16:18.793 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.793 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.793 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.052 14:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.052 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.052 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.052 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.052 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.052 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.052 { 00:16:19.052 "cntlid": 93, 00:16:19.052 "qid": 0, 00:16:19.052 "state": "enabled", 00:16:19.052 "thread": "nvmf_tgt_poll_group_000", 00:16:19.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.052 "listen_address": { 00:16:19.052 "trtype": "TCP", 00:16:19.052 "adrfam": "IPv4", 00:16:19.052 "traddr": "10.0.0.2", 00:16:19.052 "trsvcid": "4420" 00:16:19.052 }, 00:16:19.052 "peer_address": { 00:16:19.052 "trtype": "TCP", 00:16:19.052 "adrfam": "IPv4", 00:16:19.052 "traddr": "10.0.0.1", 00:16:19.052 "trsvcid": "51880" 00:16:19.052 }, 00:16:19.052 "auth": { 00:16:19.052 "state": "completed", 00:16:19.052 "digest": "sha384", 00:16:19.052 "dhgroup": "ffdhe8192" 00:16:19.052 } 00:16:19.052 } 00:16:19.052 ]' 00:16:19.052 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.052 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.052 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.311 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.311 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.311 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.311 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.311 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.570 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:19.570 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:20.146 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.147 14:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.147 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.715 00:16:20.715 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.715 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.715 
14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.974 { 00:16:20.974 "cntlid": 95, 00:16:20.974 "qid": 0, 00:16:20.974 "state": "enabled", 00:16:20.974 "thread": "nvmf_tgt_poll_group_000", 00:16:20.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.974 "listen_address": { 00:16:20.974 "trtype": "TCP", 00:16:20.974 "adrfam": "IPv4", 00:16:20.974 "traddr": "10.0.0.2", 00:16:20.974 "trsvcid": "4420" 00:16:20.974 }, 00:16:20.974 "peer_address": { 00:16:20.974 "trtype": "TCP", 00:16:20.974 "adrfam": "IPv4", 00:16:20.974 "traddr": "10.0.0.1", 00:16:20.974 "trsvcid": "51914" 00:16:20.974 }, 00:16:20.974 "auth": { 00:16:20.974 "state": "completed", 00:16:20.974 "digest": "sha384", 00:16:20.974 "dhgroup": "ffdhe8192" 00:16:20.974 } 00:16:20.974 } 00:16:20.974 ]' 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.974 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.233 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:21.233 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:21.802 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.802 14:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.802 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.802 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.802 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.802 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:21.802 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.802 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.802 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:21.802 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.062 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.322 00:16:22.322 
14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.322 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.322 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.581 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.581 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.581 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.581 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.581 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.581 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.581 { 00:16:22.581 "cntlid": 97, 00:16:22.581 "qid": 0, 00:16:22.581 "state": "enabled", 00:16:22.581 "thread": "nvmf_tgt_poll_group_000", 00:16:22.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.581 "listen_address": { 00:16:22.581 "trtype": "TCP", 00:16:22.581 "adrfam": "IPv4", 00:16:22.581 "traddr": "10.0.0.2", 00:16:22.581 "trsvcid": "4420" 00:16:22.581 }, 00:16:22.581 "peer_address": { 00:16:22.581 "trtype": "TCP", 00:16:22.581 "adrfam": "IPv4", 00:16:22.581 "traddr": "10.0.0.1", 00:16:22.581 "trsvcid": "51940" 00:16:22.581 }, 00:16:22.581 "auth": { 00:16:22.581 "state": "completed", 00:16:22.582 "digest": "sha512", 00:16:22.582 "dhgroup": "null" 00:16:22.582 } 00:16:22.582 } 00:16:22.582 ]' 00:16:22.582 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.582 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.582 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.582 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.582 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.582 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.582 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.582 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.841 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:22.841 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:23.409 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.409 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.409 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.409 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.409 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.409 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.409 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.409 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.669 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.928 00:16:23.928 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.928 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.928 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.187 { 00:16:24.187 "cntlid": 99, 00:16:24.187 "qid": 0, 00:16:24.187 "state": "enabled", 00:16:24.187 "thread": "nvmf_tgt_poll_group_000", 00:16:24.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.187 "listen_address": { 00:16:24.187 "trtype": "TCP", 00:16:24.187 "adrfam": "IPv4", 00:16:24.187 "traddr": "10.0.0.2", 00:16:24.187 "trsvcid": "4420" 00:16:24.187 }, 00:16:24.187 "peer_address": { 00:16:24.187 "trtype": "TCP", 00:16:24.187 "adrfam": "IPv4", 00:16:24.187 "traddr": "10.0.0.1", 00:16:24.187 "trsvcid": "51954" 00:16:24.187 }, 00:16:24.187 "auth": { 00:16:24.187 "state": "completed", 00:16:24.187 "digest": "sha512", 00:16:24.187 "dhgroup": "null" 00:16:24.187 } 00:16:24.187 } 00:16:24.187 ]' 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.187 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.446 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:24.446 14:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:25.014 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.014 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.014 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.014 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.014 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.014 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.014 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.014 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
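The repeating pattern in the trace above is target/auth.sh's connect_authenticate helper: one DH-HMAC-CHAP handshake per (digest, dhgroup, key) combination, verified by reading back the qpair's auth block. A minimal bash sketch of that flow, reconstructed from the xtrace lines (hostrpc at auth.sh@31, the helper body at @60-78); $rootdir, $hostnqn and $subnqn are stand-ins for the literal path and NQNs printed in the trace:

hostrpc() {
  # auth.sh@31: forward an RPC to the host-side SPDK app listening on /var/tmp/host.sock
  "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}

connect_authenticate() {
  local digest dhgroup key ckey qpairs
  digest=$1 dhgroup=$2 key=key$3
  # auth.sh@68: pass a controller key only for key IDs that define one
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

  # Register the host on the subsystem with its DHCHAP key (plus ctrlr key, if set)
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" "${ckey[@]}"
  # Attach from the host side; this is where the DH-HMAC-CHAP handshake actually runs
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "$key" "${ckey[@]}"

  # Confirm the controller exists and the qpair negotiated the expected parameters
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

  hostrpc bdev_nvme_detach_controller nvme0
}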
00:16:25.274 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.533 00:16:25.533 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.533 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.533 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.793 { 00:16:25.793 "cntlid": 101, 00:16:25.793 "qid": 0, 00:16:25.793 "state": "enabled", 00:16:25.793 "thread": "nvmf_tgt_poll_group_000", 00:16:25.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.793 "listen_address": { 00:16:25.793 "trtype": "TCP", 00:16:25.793 "adrfam": "IPv4", 00:16:25.793 "traddr": "10.0.0.2", 00:16:25.793 "trsvcid": "4420" 00:16:25.793 }, 00:16:25.793 "peer_address": { 00:16:25.793 "trtype": "TCP", 00:16:25.793 "adrfam": "IPv4", 00:16:25.793 "traddr": "10.0.0.1", 00:16:25.793 "trsvcid": "52318" 00:16:25.793 }, 00:16:25.793 "auth": { 00:16:25.793 "state": "completed", 00:16:25.793 "digest": "sha512", 00:16:25.793 "dhgroup": "null" 00:16:25.793 } 00:16:25.793 } 00:16:25.793 ]' 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.793 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.794 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.794 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.053 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:26.053 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:26.623 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.623 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.623 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.623 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.623 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.623 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.623 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:26.623 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:26.882 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:26.882 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.882 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.882 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:26.882 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.882 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.882 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:26.882 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.883 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.883 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.883 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.883 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.883 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.142 00:16:27.142 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.142 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.142 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.142 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.142 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.142 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.142 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.142 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.142 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.142 { 00:16:27.142 "cntlid": 103, 00:16:27.142 "qid": 0, 00:16:27.142 "state": "enabled", 00:16:27.142 "thread": "nvmf_tgt_poll_group_000", 00:16:27.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:27.142 "listen_address": { 00:16:27.142 "trtype": "TCP", 00:16:27.142 "adrfam": "IPv4", 00:16:27.142 "traddr": "10.0.0.2", 00:16:27.142 "trsvcid": "4420" 00:16:27.142 }, 00:16:27.142 "peer_address": { 00:16:27.142 "trtype": "TCP", 00:16:27.142 "adrfam": "IPv4", 00:16:27.142 "traddr": "10.0.0.1", 00:16:27.142 "trsvcid": "52334" 00:16:27.142 }, 00:16:27.142 "auth": { 00:16:27.142 "state": "completed", 00:16:27.142 "digest": "sha512", 00:16:27.142 "dhgroup": "null" 00:16:27.142 } 00:16:27.142 } 00:16:27.142 ]' 00:16:27.142 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.402 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.402 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.402 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:27.402 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.402 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.402 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.402 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.662 14:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:27.662 14:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.231 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.497 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.497 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.497 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.497 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.497 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.497 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.497 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
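Each such block is driven by three nested loops (auth.sh@118-123 in the trace): for every digest and dhgroup the host is reconfigured with bdev_nvme_set_options so only that one combination can be negotiated, then every key ID is exercised. A sketch of the loop, assuming array names matching the ${digests[@]}, ${dhgroups[@]} and ${!keys[@]} expansions shown above:

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Pin the host to a single digest/dhgroup so the values later read back
      # from nvmf_subsystem_get_qpairs are deterministic
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done

That ordering is why the trace walks sha384/ffdhe8192 through keys 0-3 before switching to sha512 with the null dhgroup and then sha512/ffdhe2048.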
00:16:28.497 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.498 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.498 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.757 { 00:16:28.757 "cntlid": 105, 00:16:28.757 "qid": 0, 00:16:28.757 "state": "enabled", 00:16:28.757 "thread": "nvmf_tgt_poll_group_000", 00:16:28.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.757 "listen_address": { 00:16:28.757 "trtype": "TCP", 00:16:28.757 "adrfam": "IPv4", 00:16:28.757 "traddr": "10.0.0.2", 00:16:28.757 "trsvcid": "4420" 00:16:28.757 }, 00:16:28.757 "peer_address": { 00:16:28.757 "trtype": "TCP", 00:16:28.757 "adrfam": "IPv4", 00:16:28.757 "traddr": "10.0.0.1", 00:16:28.757 "trsvcid": "52370" 00:16:28.757 }, 00:16:28.757 "auth": { 00:16:28.757 "state": "completed", 00:16:28.757 "digest": "sha512", 00:16:28.757 "dhgroup": "ffdhe2048" 00:16:28.757 } 00:16:28.757 } 00:16:28.757 ]' 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.757 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.016 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.016 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.016 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.016 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.016 14:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.276 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:29.276 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:29.844 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.844 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.844 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.844 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.844 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.844 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.844 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.844 14:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.844 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:29.844 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.844 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.844 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.844 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.844 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.844 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.844 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.844 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.844 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.844 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.845 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.845 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.103 00:16:30.103 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.103 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.103 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.363 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.363 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.363 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.363 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.363 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.363 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.363 { 00:16:30.363 "cntlid": 107, 00:16:30.363 "qid": 0, 00:16:30.363 "state": "enabled", 00:16:30.363 "thread": "nvmf_tgt_poll_group_000", 00:16:30.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.363 "listen_address": { 00:16:30.363 "trtype": "TCP", 00:16:30.363 "adrfam": "IPv4", 00:16:30.363 "traddr": "10.0.0.2", 00:16:30.363 "trsvcid": "4420" 00:16:30.363 }, 00:16:30.363 "peer_address": { 00:16:30.363 "trtype": "TCP", 00:16:30.363 "adrfam": "IPv4", 00:16:30.363 "traddr": "10.0.0.1", 00:16:30.363 "trsvcid": "52390" 00:16:30.363 }, 00:16:30.363 "auth": { 00:16:30.363 "state": "completed", 00:16:30.363 "digest": "sha512", 00:16:30.363 "dhgroup": "ffdhe2048" 00:16:30.363 } 00:16:30.363 } 00:16:30.363 ]' 00:16:30.363 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.363 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.363 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.363 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.363 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:30.623 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.623 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.623 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.623 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:30.623 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:31.190 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
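For readers following the trace: each authentication pass above reduces to the same three host/target RPCs. A minimal sketch of that sequence is below, using the sha512/ffdhe2048/key2 pass from this point in the log; rpc.py paths are shortened, the target is assumed to listen on its default RPC socket, and the named keys (key2/ckey2) are assumed to have been loaded into the keyring earlier in the run, outside this excerpt.

    # Host NQN used throughout this run
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    # Host side: restrict DH-HMAC-CHAP to one digest/dhgroup pair for this pass
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Target side: allow the host NQN and bind its key pair (default RPC socket)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach with the same key pair; authentication runs
    # during the fabrics CONNECT
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2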
00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.450 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.709 00:16:31.709 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.709 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.709 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.968 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.968 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.968 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.968 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.968 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.968 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.968 { 00:16:31.968 "cntlid": 109, 00:16:31.968 "qid": 0, 00:16:31.968 "state": "enabled", 00:16:31.968 "thread": "nvmf_tgt_poll_group_000", 00:16:31.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.968 "listen_address": { 00:16:31.968 "trtype": "TCP", 00:16:31.968 "adrfam": "IPv4", 00:16:31.968 "traddr": "10.0.0.2", 00:16:31.968 "trsvcid": "4420" 00:16:31.968 }, 00:16:31.968 "peer_address": { 00:16:31.968 "trtype": "TCP", 00:16:31.968 "adrfam": "IPv4", 00:16:31.968 "traddr": "10.0.0.1", 00:16:31.968 "trsvcid": "52406" 00:16:31.968 }, 00:16:31.968 "auth": { 00:16:31.968 "state": "completed", 00:16:31.968 "digest": "sha512", 00:16:31.968 "dhgroup": "ffdhe2048" 00:16:31.968 } 00:16:31.968 } 00:16:31.968 ]' 00:16:31.968 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.968 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.968 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.227 14:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.227 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.227 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.227 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.227 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.227 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:32.227 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:32.795 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.795 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.795 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.795 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.055 14:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.055 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.314 00:16:33.314 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.314 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.314 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.573 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.573 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.573 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.573 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.573 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.573 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.573 { 00:16:33.573 "cntlid": 111, 00:16:33.573 "qid": 0, 00:16:33.573 "state": "enabled", 00:16:33.573 "thread": "nvmf_tgt_poll_group_000", 00:16:33.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.573 "listen_address": { 00:16:33.573 "trtype": "TCP", 00:16:33.573 "adrfam": "IPv4", 00:16:33.573 "traddr": "10.0.0.2", 00:16:33.573 "trsvcid": "4420" 00:16:33.573 }, 00:16:33.573 "peer_address": { 00:16:33.573 "trtype": "TCP", 00:16:33.573 "adrfam": "IPv4", 00:16:33.573 "traddr": "10.0.0.1", 00:16:33.573 "trsvcid": "52434" 00:16:33.573 }, 00:16:33.573 "auth": { 00:16:33.573 "state": "completed", 00:16:33.573 "digest": "sha512", 00:16:33.573 "dhgroup": "ffdhe2048" 00:16:33.573 } 00:16:33.573 } 00:16:33.573 ]' 00:16:33.573 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.573 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.573 
14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.573 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.573 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.832 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.832 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.832 14:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.832 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:33.832 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:34.401 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.401 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.401 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.401 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.401 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.401 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.401 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.401 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:34.401 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.660 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.919 00:16:34.919 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.919 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.919 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.178 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.178 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.178 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.178 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.178 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.178 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.178 { 00:16:35.178 "cntlid": 113, 00:16:35.178 "qid": 0, 00:16:35.178 "state": "enabled", 00:16:35.178 "thread": "nvmf_tgt_poll_group_000", 00:16:35.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.178 "listen_address": { 00:16:35.178 "trtype": "TCP", 00:16:35.178 "adrfam": "IPv4", 00:16:35.178 "traddr": "10.0.0.2", 00:16:35.178 "trsvcid": "4420" 00:16:35.178 }, 00:16:35.178 "peer_address": { 00:16:35.178 "trtype": "TCP", 00:16:35.178 "adrfam": "IPv4", 00:16:35.178 "traddr": "10.0.0.1", 00:16:35.178 "trsvcid": "48984" 00:16:35.178 }, 00:16:35.178 "auth": { 00:16:35.178 "state": "completed", 00:16:35.178 "digest": "sha512", 00:16:35.178 "dhgroup": "ffdhe3072" 00:16:35.178 } 00:16:35.178 } 00:16:35.178 ]' 00:16:35.178 14:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.178 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.178 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.178 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.178 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.438 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.438 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.438 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.438 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:35.438 14:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:36.007 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.266 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.266 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.266 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.266 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.266 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.267 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.526 00:16:36.786 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.786 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.786 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.786 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.786 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.786 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.786 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.786 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.786 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.786 { 00:16:36.786 "cntlid": 115, 00:16:36.786 "qid": 0, 00:16:36.786 "state": "enabled", 00:16:36.786 "thread": "nvmf_tgt_poll_group_000", 00:16:36.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.786 "listen_address": { 00:16:36.786 "trtype": "TCP", 00:16:36.786 "adrfam": "IPv4", 00:16:36.786 "traddr": "10.0.0.2", 00:16:36.786 "trsvcid": "4420" 00:16:36.786 }, 00:16:36.786 "peer_address": { 00:16:36.786 "trtype": "TCP", 00:16:36.786 "adrfam": "IPv4", 
00:16:36.786 "traddr": "10.0.0.1", 00:16:36.786 "trsvcid": "48998" 00:16:36.786 }, 00:16:36.786 "auth": { 00:16:36.786 "state": "completed", 00:16:36.786 "digest": "sha512", 00:16:36.786 "dhgroup": "ffdhe3072" 00:16:36.786 } 00:16:36.786 } 00:16:36.786 ]' 00:16:36.786 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.786 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.786 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.045 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.045 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.045 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.045 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.045 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.304 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:37.304 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:37.873 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.873 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.873 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.873 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.873 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.873 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.873 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.873 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.873 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.132 00:16:38.133 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.133 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.133 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.392 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.392 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.392 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.392 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.392 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.392 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.392 { 00:16:38.392 "cntlid": 117, 00:16:38.392 "qid": 0, 00:16:38.392 "state": "enabled", 00:16:38.392 "thread": "nvmf_tgt_poll_group_000", 00:16:38.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.392 "listen_address": { 00:16:38.392 "trtype": "TCP", 
00:16:38.392 "adrfam": "IPv4", 00:16:38.392 "traddr": "10.0.0.2", 00:16:38.392 "trsvcid": "4420" 00:16:38.392 }, 00:16:38.392 "peer_address": { 00:16:38.392 "trtype": "TCP", 00:16:38.392 "adrfam": "IPv4", 00:16:38.392 "traddr": "10.0.0.1", 00:16:38.392 "trsvcid": "49018" 00:16:38.392 }, 00:16:38.392 "auth": { 00:16:38.392 "state": "completed", 00:16:38.392 "digest": "sha512", 00:16:38.392 "dhgroup": "ffdhe3072" 00:16:38.392 } 00:16:38.392 } 00:16:38.392 ]' 00:16:38.392 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.392 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.392 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.652 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.652 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.652 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.652 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.652 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.911 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:38.911 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.480 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.740 00:16:39.740 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.740 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.740 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.999 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.999 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.999 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.999 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.999 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.999 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.999 { 00:16:39.999 "cntlid": 119, 00:16:39.999 "qid": 0, 00:16:39.999 "state": "enabled", 00:16:39.999 "thread": "nvmf_tgt_poll_group_000", 00:16:39.999 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.999 "listen_address": { 00:16:39.999 "trtype": "TCP", 00:16:39.999 "adrfam": "IPv4", 00:16:39.999 "traddr": "10.0.0.2", 00:16:39.999 "trsvcid": "4420" 00:16:39.999 }, 00:16:39.999 "peer_address": { 00:16:39.999 "trtype": "TCP", 00:16:39.999 "adrfam": "IPv4", 00:16:39.999 "traddr": "10.0.0.1", 00:16:39.999 "trsvcid": "49056" 00:16:39.999 }, 00:16:39.999 "auth": { 00:16:39.999 "state": "completed", 00:16:39.999 "digest": "sha512", 00:16:39.999 "dhgroup": "ffdhe3072" 00:16:39.999 } 00:16:39.999 } 00:16:39.999 ]' 00:16:39.999 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.999 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.999 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.258 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.258 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.258 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.258 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.258 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.518 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:40.518 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:41.088 14:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.088 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.347 00:16:41.605 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.605 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.605 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.605 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.605 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.605 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.605 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.605 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.605 14:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.605 { 00:16:41.605 "cntlid": 121, 00:16:41.605 "qid": 0, 00:16:41.605 "state": "enabled", 00:16:41.605 "thread": "nvmf_tgt_poll_group_000", 00:16:41.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:41.605 "listen_address": { 00:16:41.605 "trtype": "TCP", 00:16:41.605 "adrfam": "IPv4", 00:16:41.605 "traddr": "10.0.0.2", 00:16:41.605 "trsvcid": "4420" 00:16:41.605 }, 00:16:41.605 "peer_address": { 00:16:41.605 "trtype": "TCP", 00:16:41.605 "adrfam": "IPv4", 00:16:41.605 "traddr": "10.0.0.1", 00:16:41.605 "trsvcid": "49088" 00:16:41.605 }, 00:16:41.605 "auth": { 00:16:41.605 "state": "completed", 00:16:41.605 "digest": "sha512", 00:16:41.605 "dhgroup": "ffdhe4096" 00:16:41.605 } 00:16:41.605 } 00:16:41.605 ]' 00:16:41.605 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.863 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.863 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.863 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.863 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.864 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.864 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.864 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.122 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:42.122 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:42.691 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.691 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.691 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.691 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.691 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
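The cadence repeating through this trace is the nested driver loop from target/auth.sh, whose loop headers appear in the xtrace comments (target/auth.sh@119-123): for each DH group, every key index gets a full connect/verify/detach cycle, and each cycle also exercises kernel nvme-cli with the raw DHHC-1 secrets before the host is removed. A condensed sketch reconstructed from those trace comments (the real script interleaves the nvme-cli connect/disconnect steps omitted here):

    # Condensed driver loop: one auth cycle per (dhgroup, key) pair,
    # all with the sha512 digest in this part of the run
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ...
        for keyid in "${!keys[@]}"; do       # 0 1 2 3
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done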
00:16:42.691 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.691 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:42.691 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:42.691 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.950 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.209 00:16:43.209 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.209 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.209 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.209 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.209 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.209 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.209 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.209 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.209 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.209 { 00:16:43.209 "cntlid": 123, 00:16:43.209 "qid": 0, 00:16:43.209 "state": "enabled", 00:16:43.209 "thread": "nvmf_tgt_poll_group_000", 00:16:43.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.209 "listen_address": { 00:16:43.209 "trtype": "TCP", 00:16:43.209 "adrfam": "IPv4", 00:16:43.209 "traddr": "10.0.0.2", 00:16:43.209 "trsvcid": "4420" 00:16:43.209 }, 00:16:43.209 "peer_address": { 00:16:43.209 "trtype": "TCP", 00:16:43.209 "adrfam": "IPv4", 00:16:43.209 "traddr": "10.0.0.1", 00:16:43.209 "trsvcid": "49108" 00:16:43.209 }, 00:16:43.209 "auth": { 00:16:43.209 "state": "completed", 00:16:43.209 "digest": "sha512", 00:16:43.209 "dhgroup": "ffdhe4096" 00:16:43.209 } 00:16:43.209 } 00:16:43.209 ]' 00:16:43.209 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.468 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.468 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.468 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.468 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.468 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.468 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.468 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.727 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:43.727 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:44.295 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.295 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.295 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.295 14:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.295 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.295 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.295 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.295 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.554 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.813 00:16:44.813 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.813 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.813 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.072 14:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.072 { 00:16:45.072 "cntlid": 125, 00:16:45.072 "qid": 0, 00:16:45.072 "state": "enabled", 00:16:45.072 "thread": "nvmf_tgt_poll_group_000", 00:16:45.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.072 "listen_address": { 00:16:45.072 "trtype": "TCP", 00:16:45.072 "adrfam": "IPv4", 00:16:45.072 "traddr": "10.0.0.2", 00:16:45.072 "trsvcid": "4420" 00:16:45.072 }, 00:16:45.072 "peer_address": { 00:16:45.072 "trtype": "TCP", 00:16:45.072 "adrfam": "IPv4", 00:16:45.072 "traddr": "10.0.0.1", 00:16:45.072 "trsvcid": "51942" 00:16:45.072 }, 00:16:45.072 "auth": { 00:16:45.072 "state": "completed", 00:16:45.072 "digest": "sha512", 00:16:45.072 "dhgroup": "ffdhe4096" 00:16:45.072 } 00:16:45.072 } 00:16:45.072 ]' 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.072 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.332 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:45.332 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:45.900 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.900 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.900 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.900 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.900 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.900 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.900 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:45.900 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.160 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.419 00:16:46.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.419 14:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.678 { 00:16:46.678 "cntlid": 127, 00:16:46.678 "qid": 0, 00:16:46.678 "state": "enabled", 00:16:46.678 "thread": "nvmf_tgt_poll_group_000", 00:16:46.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:46.678 "listen_address": { 00:16:46.678 "trtype": "TCP", 00:16:46.678 "adrfam": "IPv4", 00:16:46.678 "traddr": "10.0.0.2", 00:16:46.678 "trsvcid": "4420" 00:16:46.678 }, 00:16:46.678 "peer_address": { 00:16:46.678 "trtype": "TCP", 00:16:46.678 "adrfam": "IPv4", 00:16:46.678 "traddr": "10.0.0.1", 00:16:46.678 "trsvcid": "51964" 00:16:46.678 }, 00:16:46.678 "auth": { 00:16:46.678 "state": "completed", 00:16:46.678 "digest": "sha512", 00:16:46.678 "dhgroup": "ffdhe4096" 00:16:46.678 } 00:16:46.678 } 00:16:46.678 ]' 00:16:46.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.937 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:46.937 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:47.506 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.506 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.506 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.506 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.506 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.506 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.506 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.506 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.506 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.765 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.024 00:16:48.024 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.024 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.024 
14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.283 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.283 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.283 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.283 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.283 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.283 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.283 { 00:16:48.283 "cntlid": 129, 00:16:48.283 "qid": 0, 00:16:48.283 "state": "enabled", 00:16:48.283 "thread": "nvmf_tgt_poll_group_000", 00:16:48.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.283 "listen_address": { 00:16:48.283 "trtype": "TCP", 00:16:48.283 "adrfam": "IPv4", 00:16:48.283 "traddr": "10.0.0.2", 00:16:48.283 "trsvcid": "4420" 00:16:48.283 }, 00:16:48.283 "peer_address": { 00:16:48.283 "trtype": "TCP", 00:16:48.283 "adrfam": "IPv4", 00:16:48.283 "traddr": "10.0.0.1", 00:16:48.284 "trsvcid": "52002" 00:16:48.284 }, 00:16:48.284 "auth": { 00:16:48.284 "state": "completed", 00:16:48.284 "digest": "sha512", 00:16:48.284 "dhgroup": "ffdhe6144" 00:16:48.284 } 00:16:48.284 } 00:16:48.284 ]' 00:16:48.284 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.284 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.284 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.284 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.284 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.284 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.284 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.284 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.542 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:48.542 14:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret 
DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:49.109 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.109 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.109 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.109 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.109 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.109 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.110 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.110 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.369 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.628 00:16:49.628 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.628 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.628 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.887 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.887 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.887 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.887 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.887 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.887 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.887 { 00:16:49.887 "cntlid": 131, 00:16:49.887 "qid": 0, 00:16:49.887 "state": "enabled", 00:16:49.887 "thread": "nvmf_tgt_poll_group_000", 00:16:49.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.887 "listen_address": { 00:16:49.887 "trtype": "TCP", 00:16:49.887 "adrfam": "IPv4", 00:16:49.887 "traddr": "10.0.0.2", 00:16:49.887 "trsvcid": "4420" 00:16:49.887 }, 00:16:49.887 "peer_address": { 00:16:49.887 "trtype": "TCP", 00:16:49.887 "adrfam": "IPv4", 00:16:49.887 "traddr": "10.0.0.1", 00:16:49.887 "trsvcid": "52022" 00:16:49.887 }, 00:16:49.887 "auth": { 00:16:49.887 "state": "completed", 00:16:49.887 "digest": "sha512", 00:16:49.887 "dhgroup": "ffdhe6144" 00:16:49.887 } 00:16:49.887 } 00:16:49.887 ]' 00:16:49.887 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.887 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.887 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.887 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.887 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.147 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.147 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.147 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.147 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:50.147 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:50.716 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.716 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.716 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.716 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.716 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.716 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.716 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.716 14:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.974 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.975 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.543 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.543 { 00:16:51.543 "cntlid": 133, 00:16:51.543 "qid": 0, 00:16:51.543 "state": "enabled", 00:16:51.543 "thread": "nvmf_tgt_poll_group_000", 00:16:51.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.543 "listen_address": { 00:16:51.543 "trtype": "TCP", 00:16:51.543 "adrfam": "IPv4", 00:16:51.543 "traddr": "10.0.0.2", 00:16:51.543 "trsvcid": "4420" 00:16:51.543 }, 00:16:51.543 "peer_address": { 00:16:51.543 "trtype": "TCP", 00:16:51.543 "adrfam": "IPv4", 00:16:51.543 "traddr": "10.0.0.1", 00:16:51.543 "trsvcid": "52044" 00:16:51.543 }, 00:16:51.543 "auth": { 00:16:51.543 "state": "completed", 00:16:51.543 "digest": "sha512", 00:16:51.543 "dhgroup": "ffdhe6144" 00:16:51.543 } 00:16:51.543 } 00:16:51.543 ]' 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.543 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.802 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.802 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.802 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.802 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.802 14:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.061 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret 
DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:52.061 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.629 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.630 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.630 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.630 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:52.630 14:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.038 00:16:53.038 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.038 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.038 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.336 { 00:16:53.336 "cntlid": 135, 00:16:53.336 "qid": 0, 00:16:53.336 "state": "enabled", 00:16:53.336 "thread": "nvmf_tgt_poll_group_000", 00:16:53.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.336 "listen_address": { 00:16:53.336 "trtype": "TCP", 00:16:53.336 "adrfam": "IPv4", 00:16:53.336 "traddr": "10.0.0.2", 00:16:53.336 "trsvcid": "4420" 00:16:53.336 }, 00:16:53.336 "peer_address": { 00:16:53.336 "trtype": "TCP", 00:16:53.336 "adrfam": "IPv4", 00:16:53.336 "traddr": "10.0.0.1", 00:16:53.336 "trsvcid": "52082" 00:16:53.336 }, 00:16:53.336 "auth": { 00:16:53.336 "state": "completed", 00:16:53.336 "digest": "sha512", 00:16:53.336 "dhgroup": "ffdhe6144" 00:16:53.336 } 00:16:53.336 } 00:16:53.336 ]' 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.336 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.632 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:53.632 14:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:16:54.208 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.209 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.209 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.209 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.209 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.209 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.209 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.209 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.209 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.468 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.035 00:16:55.035 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.035 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.035 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.035 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.035 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.035 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.035 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.035 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.035 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.035 { 00:16:55.035 "cntlid": 137, 00:16:55.035 "qid": 0, 00:16:55.035 "state": "enabled", 00:16:55.035 "thread": "nvmf_tgt_poll_group_000", 00:16:55.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.035 "listen_address": { 00:16:55.035 "trtype": "TCP", 00:16:55.035 "adrfam": "IPv4", 00:16:55.035 "traddr": "10.0.0.2", 00:16:55.035 "trsvcid": "4420" 00:16:55.035 }, 00:16:55.035 "peer_address": { 00:16:55.035 "trtype": "TCP", 00:16:55.035 "adrfam": "IPv4", 00:16:55.035 "traddr": "10.0.0.1", 00:16:55.035 "trsvcid": "38404" 00:16:55.035 }, 00:16:55.035 "auth": { 00:16:55.035 "state": "completed", 00:16:55.035 "digest": "sha512", 00:16:55.036 "dhgroup": "ffdhe8192" 00:16:55.036 } 00:16:55.036 } 00:16:55.036 ]' 00:16:55.036 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.036 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.036 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.294 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.294 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.294 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.294 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.294 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.552 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:55.552 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.120 14:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.120 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.687 00:16:56.687 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.687 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.687 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.945 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.945 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.945 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.945 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.945 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.945 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.945 { 00:16:56.945 "cntlid": 139, 00:16:56.945 "qid": 0, 00:16:56.945 "state": "enabled", 00:16:56.945 "thread": "nvmf_tgt_poll_group_000", 00:16:56.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.945 "listen_address": { 00:16:56.945 "trtype": "TCP", 00:16:56.945 "adrfam": "IPv4", 00:16:56.945 "traddr": "10.0.0.2", 00:16:56.945 "trsvcid": "4420" 00:16:56.945 }, 00:16:56.945 "peer_address": { 00:16:56.945 "trtype": "TCP", 00:16:56.945 "adrfam": "IPv4", 00:16:56.945 "traddr": "10.0.0.1", 00:16:56.945 "trsvcid": "38442" 00:16:56.945 }, 00:16:56.945 "auth": { 00:16:56.945 "state": "completed", 00:16:56.945 "digest": "sha512", 00:16:56.945 "dhgroup": "ffdhe8192" 00:16:56.945 } 00:16:56.945 } 00:16:56.945 ]' 00:16:56.945 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.945 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.945 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.945 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.945 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.945 14:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.946 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.946 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.204 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:57.204 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: --dhchap-ctrl-secret DHHC-1:02:ODJiYmVkMDZhOTgyYjA5ZDExNjgzYTk5YWMxNjM2YzcwNDU5NzU3ZWNmZmJlNjg0jCiQhA==: 00:16:57.771 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.771 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.771 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.771 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.771 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.771 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.771 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.771 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.030 14:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.030 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.598 00:16:58.598 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.598 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.598 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.856 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.857 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.857 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.857 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.857 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.857 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.857 { 00:16:58.857 "cntlid": 141, 00:16:58.857 "qid": 0, 00:16:58.857 "state": "enabled", 00:16:58.857 "thread": "nvmf_tgt_poll_group_000", 00:16:58.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.857 "listen_address": { 00:16:58.857 "trtype": "TCP", 00:16:58.857 "adrfam": "IPv4", 00:16:58.857 "traddr": "10.0.0.2", 00:16:58.857 "trsvcid": "4420" 00:16:58.857 }, 00:16:58.857 "peer_address": { 00:16:58.857 "trtype": "TCP", 00:16:58.857 "adrfam": "IPv4", 00:16:58.857 "traddr": "10.0.0.1", 00:16:58.857 "trsvcid": "38470" 00:16:58.857 }, 00:16:58.857 "auth": { 00:16:58.857 "state": "completed", 00:16:58.857 "digest": "sha512", 00:16:58.857 "dhgroup": "ffdhe8192" 00:16:58.857 } 00:16:58.857 } 00:16:58.857 ]' 00:16:58.857 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.857 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.857 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.857 14:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.857 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.857 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.857 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.857 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.115 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:59.115 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:01:ZGE0NGIyZjgyYTZjNDdmZDRlYmU5ZDE3MGMyNmYxYjNIL7U5: 00:16:59.682 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.682 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.682 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.682 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.682 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.682 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.682 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.682 14:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.941 14:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.941 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.507 00:17:00.507 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.507 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.507 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.507 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.507 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.507 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.507 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.766 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.766 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.766 { 00:17:00.766 "cntlid": 143, 00:17:00.766 "qid": 0, 00:17:00.766 "state": "enabled", 00:17:00.766 "thread": "nvmf_tgt_poll_group_000", 00:17:00.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.766 "listen_address": { 00:17:00.766 "trtype": "TCP", 00:17:00.766 "adrfam": "IPv4", 00:17:00.766 "traddr": "10.0.0.2", 00:17:00.766 "trsvcid": "4420" 00:17:00.766 }, 00:17:00.766 "peer_address": { 00:17:00.766 "trtype": "TCP", 00:17:00.766 "adrfam": "IPv4", 00:17:00.766 "traddr": "10.0.0.1", 00:17:00.766 "trsvcid": "38492" 00:17:00.766 }, 00:17:00.766 "auth": { 00:17:00.766 "state": "completed", 00:17:00.766 "digest": "sha512", 00:17:00.766 "dhgroup": "ffdhe8192" 00:17:00.766 } 00:17:00.766 } 00:17:00.766 ]' 00:17:00.766 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.766 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.766 
14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.766 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.766 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.766 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.766 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.766 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.025 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:17:01.025 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:17:01.592 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.592 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.592 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.592 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.592 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.592 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:01.592 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:01.592 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:01.592 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:01.592 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:01.592 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:01.850 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:01.850 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.850 14:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.850 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:01.850 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:01.850 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.850 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.850 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.850 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.851 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.851 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.851 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.851 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.109 00:17:02.367 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.367 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.367 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.367 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.367 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.367 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.367 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.367 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.367 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.367 { 00:17:02.367 "cntlid": 145, 00:17:02.367 "qid": 0, 00:17:02.367 "state": "enabled", 00:17:02.367 "thread": "nvmf_tgt_poll_group_000", 00:17:02.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.367 "listen_address": { 00:17:02.367 "trtype": "TCP", 00:17:02.367 "adrfam": "IPv4", 00:17:02.367 "traddr": "10.0.0.2", 00:17:02.367 "trsvcid": "4420" 00:17:02.367 }, 00:17:02.367 "peer_address": { 00:17:02.367 
"trtype": "TCP", 00:17:02.367 "adrfam": "IPv4", 00:17:02.367 "traddr": "10.0.0.1", 00:17:02.367 "trsvcid": "38512" 00:17:02.367 }, 00:17:02.367 "auth": { 00:17:02.367 "state": "completed", 00:17:02.367 "digest": "sha512", 00:17:02.367 "dhgroup": "ffdhe8192" 00:17:02.367 } 00:17:02.367 } 00:17:02.367 ]' 00:17:02.367 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.367 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.625 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.625 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.625 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.625 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.625 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.625 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.883 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:17:02.883 14:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM0ODAxYjMzODYzNjEyZWUwMzE3YTkxOTE5ZjVhMDE2NjI1ZWFlYWYwZWZlYzBmucru+w==: --dhchap-ctrl-secret DHHC-1:03:YWYzODI1NjgxN2Q0ZTQxNDU4YWZhMGNiZTcwZWFjMDgzMTYwOTA3MjhjY2E4NDkyOTU2ZWM4M2MzMTMzNzVlYSejUTc=: 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:03.450 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:03.709 request: 00:17:03.709 { 00:17:03.709 "name": "nvme0", 00:17:03.709 "trtype": "tcp", 00:17:03.709 "traddr": "10.0.0.2", 00:17:03.709 "adrfam": "ipv4", 00:17:03.709 "trsvcid": "4420", 00:17:03.709 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:03.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.709 "prchk_reftag": false, 00:17:03.709 "prchk_guard": false, 00:17:03.709 "hdgst": false, 00:17:03.709 "ddgst": false, 00:17:03.709 "dhchap_key": "key2", 00:17:03.709 "allow_unrecognized_csi": false, 00:17:03.709 "method": "bdev_nvme_attach_controller", 00:17:03.709 "req_id": 1 00:17:03.709 } 00:17:03.709 Got JSON-RPC error response 00:17:03.709 response: 00:17:03.709 { 00:17:03.709 "code": -5, 00:17:03.709 "message": "Input/output error" 00:17:03.709 } 00:17:03.709 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:03.709 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.709 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.709 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.709 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.709 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.709 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.709 14:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.709 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.709 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.709 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.968 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.968 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:03.968 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:03.968 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:03.968 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:03.968 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.968 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:03.968 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.968 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:03.968 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:03.968 14:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.229 request: 00:17:04.229 { 00:17:04.229 "name": "nvme0", 00:17:04.229 "trtype": "tcp", 00:17:04.229 "traddr": "10.0.0.2", 00:17:04.229 "adrfam": "ipv4", 00:17:04.229 "trsvcid": "4420", 00:17:04.229 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.229 "prchk_reftag": false, 00:17:04.229 "prchk_guard": false, 00:17:04.229 "hdgst": false, 00:17:04.229 "ddgst": false, 00:17:04.229 "dhchap_key": "key1", 00:17:04.229 "dhchap_ctrlr_key": "ckey2", 00:17:04.229 "allow_unrecognized_csi": false, 00:17:04.229 "method": "bdev_nvme_attach_controller", 00:17:04.229 "req_id": 1 00:17:04.229 } 00:17:04.229 Got JSON-RPC error response 00:17:04.229 response: 00:17:04.229 { 00:17:04.229 "code": -5, 00:17:04.229 "message": "Input/output error" 00:17:04.229 } 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:04.229 14:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.229 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.800 request: 00:17:04.800 { 00:17:04.800 "name": "nvme0", 00:17:04.800 "trtype": "tcp", 00:17:04.800 "traddr": "10.0.0.2", 00:17:04.800 "adrfam": "ipv4", 00:17:04.800 "trsvcid": "4420", 00:17:04.800 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.800 "prchk_reftag": false, 00:17:04.800 "prchk_guard": false, 00:17:04.800 "hdgst": false, 00:17:04.800 "ddgst": false, 00:17:04.800 "dhchap_key": "key1", 00:17:04.800 "dhchap_ctrlr_key": "ckey1", 00:17:04.800 "allow_unrecognized_csi": false, 00:17:04.800 "method": "bdev_nvme_attach_controller", 00:17:04.800 "req_id": 1 00:17:04.800 } 00:17:04.800 Got JSON-RPC error response 00:17:04.800 response: 00:17:04.800 { 00:17:04.800 "code": -5, 00:17:04.800 "message": "Input/output error" 00:17:04.800 } 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1442236 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1442236 ']' 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1442236 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1442236 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1442236' 00:17:04.800 killing process with pid 1442236 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1442236 00:17:04.800 14:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1442236 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1463964 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1463964 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1463964 ']' 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.061 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.000 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.000 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:06.000 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:06.000 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.000 14:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.000 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.000 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:06.000 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1463964 00:17:06.000 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1463964 ']' 00:17:06.000 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.000 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.000 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
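[annotation] At this point the original target (pid 1442236) has been killed and a fresh nvmf_tgt started in the cvl_0_0_ns_spdk namespace with --wait-for-rpc -L nvmf_auth, so it idles until the DH-HMAC-CHAP keys are registered over its RPC socket. The keyring_file_add_key calls that follow do exactly that; condensed into a sketch (key-file names copied from the trace, the files themselves generated earlier in auth.sh, and rpc.py assumed to default to /var/tmp/spdk.sock):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # register host keys key0..key3 and controller keys ckey0..ckey2 with the target keyring
    $rpc keyring_file_add_key key0  /tmp/spdk.key-null.K72
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PiG
    $rpc keyring_file_add_key key1  /tmp/spdk.key-sha256.cDt
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cf4
    $rpc keyring_file_add_key key2  /tmp/spdk.key-sha384.jr8
    $rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T9k
    $rpc keyring_file_add_key key3  /tmp/spdk.key-sha512.E5I   # key3 has no ckey counterpart in the trace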
00:17:06.000 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.000 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.260 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.260 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:06.260 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:06.260 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.260 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.260 null0 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.K72 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.PiG ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PiG 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cDt 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.cf4 ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cf4 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.261 14:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.jr8 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.T9k ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T9k 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.E5I 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
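[annotation] The hostrpc wrapper used throughout the trace, including on the line above, simply forwards its arguments to rpc.py against the host-side SPDK application's socket; its expansion is visible on the very next trace line. Reconstructed from that expansion as a sketch (the actual helper lives in target/auth.sh, line 31):

    # sketch of the hostrpc helper as implied by the trace
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

Here $rootdir would be /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk; every hostrpc line in the trace expands to exactly this rpc.py invocation.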
00:17:06.261 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.199 nvme0n1 00:17:07.199 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.199 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.199 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.199 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.199 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.199 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.199 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.199 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.199 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.199 { 00:17:07.199 "cntlid": 1, 00:17:07.199 "qid": 0, 00:17:07.199 "state": "enabled", 00:17:07.199 "thread": "nvmf_tgt_poll_group_000", 00:17:07.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.199 "listen_address": { 00:17:07.199 "trtype": "TCP", 00:17:07.199 "adrfam": "IPv4", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "trsvcid": "4420" 00:17:07.199 }, 00:17:07.199 "peer_address": { 00:17:07.199 "trtype": "TCP", 00:17:07.199 "adrfam": "IPv4", 00:17:07.199 "traddr": "10.0.0.1", 00:17:07.199 "trsvcid": "59264" 00:17:07.199 }, 00:17:07.199 "auth": { 00:17:07.199 "state": "completed", 00:17:07.199 "digest": "sha512", 00:17:07.199 "dhgroup": "ffdhe8192" 00:17:07.199 } 00:17:07.199 } 00:17:07.199 ]' 00:17:07.199 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.459 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.459 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.459 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.459 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.459 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.459 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.459 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.719 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:17:07.719 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:17:08.290 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.290 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.290 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.290 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.290 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.290 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:08.290 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.290 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.290 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.290 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:08.290 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.550 request: 00:17:08.550 { 00:17:08.550 "name": "nvme0", 00:17:08.550 "trtype": "tcp", 00:17:08.550 "traddr": "10.0.0.2", 00:17:08.550 "adrfam": "ipv4", 00:17:08.550 "trsvcid": "4420", 00:17:08.550 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:08.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.550 "prchk_reftag": false, 00:17:08.550 "prchk_guard": false, 00:17:08.550 "hdgst": false, 00:17:08.550 "ddgst": false, 00:17:08.550 "dhchap_key": "key3", 00:17:08.550 "allow_unrecognized_csi": false, 00:17:08.550 "method": "bdev_nvme_attach_controller", 00:17:08.550 "req_id": 1 00:17:08.550 } 00:17:08.550 Got JSON-RPC error response 00:17:08.550 response: 00:17:08.550 { 00:17:08.550 "code": -5, 00:17:08.550 "message": "Input/output error" 00:17:08.550 } 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:08.550 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:08.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:08.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:08.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.070 request: 00:17:09.070 { 00:17:09.070 "name": "nvme0", 00:17:09.070 "trtype": "tcp", 00:17:09.070 "traddr": "10.0.0.2", 00:17:09.070 "adrfam": "ipv4", 00:17:09.070 "trsvcid": "4420", 00:17:09.070 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.070 "prchk_reftag": false, 00:17:09.070 "prchk_guard": false, 00:17:09.070 "hdgst": false, 00:17:09.070 "ddgst": false, 00:17:09.070 "dhchap_key": "key3", 00:17:09.070 "allow_unrecognized_csi": false, 00:17:09.070 "method": "bdev_nvme_attach_controller", 00:17:09.070 "req_id": 1 00:17:09.070 } 00:17:09.070 Got JSON-RPC error response 00:17:09.070 response: 00:17:09.070 { 00:17:09.070 "code": -5, 00:17:09.070 "message": "Input/output error" 00:17:09.070 } 00:17:09.070 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.070 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.070 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.070 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.070 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:09.070 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:09.070 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:09.070 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.070 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.070 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.329 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.330 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.330 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.589 request: 00:17:09.589 { 00:17:09.589 "name": "nvme0", 00:17:09.589 "trtype": "tcp", 00:17:09.589 "traddr": "10.0.0.2", 00:17:09.589 "adrfam": "ipv4", 00:17:09.589 "trsvcid": "4420", 00:17:09.589 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.589 "prchk_reftag": false, 00:17:09.589 "prchk_guard": false, 00:17:09.589 "hdgst": false, 00:17:09.589 "ddgst": false, 00:17:09.589 "dhchap_key": "key0", 00:17:09.589 "dhchap_ctrlr_key": "key1", 00:17:09.590 "allow_unrecognized_csi": false, 00:17:09.590 "method": "bdev_nvme_attach_controller", 00:17:09.590 "req_id": 1 00:17:09.590 } 00:17:09.590 Got JSON-RPC error response 00:17:09.590 response: 00:17:09.590 { 00:17:09.590 "code": -5, 00:17:09.590 "message": "Input/output error" 00:17:09.590 } 00:17:09.590 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.590 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.590 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.590 14:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.590 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:09.590 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:09.590 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:09.849 nvme0n1 00:17:09.849 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:09.849 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:09.849 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.113 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.113 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.113 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.375 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:10.375 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.375 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.375 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.375 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:10.375 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:10.375 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:10.944 nvme0n1 00:17:11.203 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:11.203 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:11.203 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:11.203 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.203 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:11.203 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.203 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.203 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.203 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:11.203 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:11.203 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.463 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.463 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:17:11.463 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: --dhchap-ctrl-secret DHHC-1:03:YzQ5MmMxY2FkNWQ1NzlmYjA3ZmY1ZWEwYTExNzE3ZDY2YzEyNWM1NGYxMTljNTA3YzJjMDQwYzUxZmYxYTMwNQ0st7c=: 00:17:12.031 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:12.031 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:12.031 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:12.031 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:12.031 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:12.031 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:12.031 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:12.031 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.031 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.292 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:17:12.292 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:12.292 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:12.292 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:12.292 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.292 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:12.292 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.292 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:12.292 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:12.292 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:12.861 request: 00:17:12.861 { 00:17:12.861 "name": "nvme0", 00:17:12.861 "trtype": "tcp", 00:17:12.861 "traddr": "10.0.0.2", 00:17:12.861 "adrfam": "ipv4", 00:17:12.861 "trsvcid": "4420", 00:17:12.861 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:12.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.861 "prchk_reftag": false, 00:17:12.861 "prchk_guard": false, 00:17:12.861 "hdgst": false, 00:17:12.861 "ddgst": false, 00:17:12.861 "dhchap_key": "key1", 00:17:12.861 "allow_unrecognized_csi": false, 00:17:12.861 "method": "bdev_nvme_attach_controller", 00:17:12.861 "req_id": 1 00:17:12.861 } 00:17:12.861 Got JSON-RPC error response 00:17:12.861 response: 00:17:12.861 { 00:17:12.861 "code": -5, 00:17:12.861 "message": "Input/output error" 00:17:12.861 } 00:17:12.861 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:12.861 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.861 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.861 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.861 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.861 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.861 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:13.430 nvme0n1 00:17:13.430 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:13.430 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:13.430 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.690 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.690 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.690 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.950 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.950 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.950 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.950 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.950 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:13.950 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:13.950 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:14.209 nvme0n1 00:17:14.209 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:14.209 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:14.209 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: '' 2s 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: ]] 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzhjYmE5ZWVkYTJjOTgyNjU2OGU5ZTAwMjg1MTkyODJX69sJ: 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:14.469 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: 2s 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: ]] 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NWVlYjNiMDVhZmE3ZTA2ZWFkYmI4MzlhN2NiNjEyNDY3NDcwNDlmM2RiM2MzZjkwGKsgRA==: 00:17:17.010 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:17.011 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:18.912 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:18.913 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:19.482 nvme0n1 00:17:19.482 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.482 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.482 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.482 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.482 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.482 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:20.052 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:20.052 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:20.052 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.052 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.052 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.052 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.052 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.052 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.052 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:20.052 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:20.311 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:20.311 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:20.311 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:20.571 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:21.141 request: 00:17:21.141 { 00:17:21.141 "name": "nvme0", 00:17:21.141 "dhchap_key": "key1", 00:17:21.141 "dhchap_ctrlr_key": "key3", 00:17:21.141 "method": "bdev_nvme_set_keys", 00:17:21.141 "req_id": 1 00:17:21.141 } 00:17:21.141 Got JSON-RPC error response 00:17:21.141 response: 00:17:21.141 { 00:17:21.141 "code": -13, 00:17:21.141 "message": "Permission denied" 00:17:21.141 } 00:17:21.141 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:21.141 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.141 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.141 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.141 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:21.141 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:21.141 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.141 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:21.141 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:22.522 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:22.522 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:22.522 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.522 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:22.522 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:22.522 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.522 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.522 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.522 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.522 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.522 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:23.092 nvme0n1 00:17:23.092 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.092 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.092 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.092 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.092 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.092 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:23.092 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.092 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
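# NOTE: illustration, not trace output. The checks around this point exercise
# SPDK's DH-HMAC-CHAP re-key path: nvmf_subsystem_set_keys rotates the keys the
# target subsystem holds for a host, bdev_nvme_set_keys re-authenticates the
# live host-side controller, and a key pair the target does not hold is
# expected to fail with JSON-RPC error -13 (Permission denied), as the response
# below shows. A minimal sketch of the happy-path rotation, using only the RPCs,
# socket path, NQNs, and key names that appear verbatim in this log:
#
#   scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
#       nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
#       --dhchap-key key2 --dhchap-ctrlr-key key3
#   scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
#       --dhchap-key key2 --dhchap-ctrlr-key key3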
00:17:23.352 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.352 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:23.352 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.352 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.352 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.611 request: 00:17:23.611 { 00:17:23.611 "name": "nvme0", 00:17:23.611 "dhchap_key": "key2", 00:17:23.611 "dhchap_ctrlr_key": "key0", 00:17:23.611 "method": "bdev_nvme_set_keys", 00:17:23.611 "req_id": 1 00:17:23.611 } 00:17:23.611 Got JSON-RPC error response 00:17:23.611 response: 00:17:23.611 { 00:17:23.611 "code": -13, 00:17:23.611 "message": "Permission denied" 00:17:23.611 } 00:17:23.611 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:23.611 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.611 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.611 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.611 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:23.611 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:23.611 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.871 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:23.871 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:24.808 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:24.808 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:24.808 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1442261 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1442261 ']' 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1442261 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:25.067 
14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1442261 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1442261' 00:17:25.067 killing process with pid 1442261 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1442261 00:17:25.067 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1442261 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:25.636 rmmod nvme_tcp 00:17:25.636 rmmod nvme_fabrics 00:17:25.636 rmmod nvme_keyring 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1463964 ']' 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1463964 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1463964 ']' 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1463964 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1463964 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1463964' 00:17:25.636 killing process with pid 1463964 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1463964 00:17:25.636 14:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1463964 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.636 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.176 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:28.176 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.K72 /tmp/spdk.key-sha256.cDt /tmp/spdk.key-sha384.jr8 /tmp/spdk.key-sha512.E5I /tmp/spdk.key-sha512.PiG /tmp/spdk.key-sha384.cf4 /tmp/spdk.key-sha256.T9k '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:28.176 00:17:28.176 real 2m34.248s 00:17:28.176 user 5m55.989s 00:17:28.176 sys 0m24.061s 00:17:28.176 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.176 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.176 ************************************ 00:17:28.176 END TEST nvmf_auth_target 00:17:28.176 ************************************ 00:17:28.176 14:27:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:28.176 14:27:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:28.176 14:27:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:28.176 14:27:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.176 14:27:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.176 ************************************ 00:17:28.176 START TEST nvmf_bdevio_no_huge 00:17:28.176 ************************************ 00:17:28.176 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:28.176 * Looking for test storage... 
00:17:28.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.176 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:28.176 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:28.176 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:28.176 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:28.176 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:28.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.177 --rc genhtml_branch_coverage=1 00:17:28.177 --rc genhtml_function_coverage=1 00:17:28.177 --rc genhtml_legend=1 00:17:28.177 --rc geninfo_all_blocks=1 00:17:28.177 --rc geninfo_unexecuted_blocks=1 00:17:28.177 00:17:28.177 ' 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:28.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.177 --rc genhtml_branch_coverage=1 00:17:28.177 --rc genhtml_function_coverage=1 00:17:28.177 --rc genhtml_legend=1 00:17:28.177 --rc geninfo_all_blocks=1 00:17:28.177 --rc geninfo_unexecuted_blocks=1 00:17:28.177 00:17:28.177 ' 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:28.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.177 --rc genhtml_branch_coverage=1 00:17:28.177 --rc genhtml_function_coverage=1 00:17:28.177 --rc genhtml_legend=1 00:17:28.177 --rc geninfo_all_blocks=1 00:17:28.177 --rc geninfo_unexecuted_blocks=1 00:17:28.177 00:17:28.177 ' 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:28.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.177 --rc genhtml_branch_coverage=1 00:17:28.177 --rc genhtml_function_coverage=1 00:17:28.177 --rc genhtml_legend=1 00:17:28.177 --rc geninfo_all_blocks=1 00:17:28.177 --rc geninfo_unexecuted_blocks=1 00:17:28.177 00:17:28.177 ' 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.177 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:28.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:28.178 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:34.757 
14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:34.757 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:34.757 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:34.757 Found net devices under 0000:86:00.0: cvl_0_0 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.757 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:34.758 Found net devices under 0000:86:00.1: cvl_0_1 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:34.758 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:34.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:17:34.758 00:17:34.758 --- 10.0.0.2 ping statistics --- 00:17:34.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.758 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:17:34.758 00:17:34.758 --- 10.0.0.1 ping statistics --- 00:17:34.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.758 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1471376 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1471376 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1471376 ']' 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.758 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.758 [2024-11-17 14:27:23.184807] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:17:34.758 [2024-11-17 14:27:23.184858] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:34.758 [2024-11-17 14:27:23.274770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.758 [2024-11-17 14:27:23.321789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.758 [2024-11-17 14:27:23.321823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.758 [2024-11-17 14:27:23.321829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.758 [2024-11-17 14:27:23.321835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.758 [2024-11-17 14:27:23.321840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
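Note on the "[: : integer expression expected" messages above (they recur in the nvmf_tls prologue below): nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', i.e. a numeric -eq test against a variable that expanded to an empty string. The test returns nonzero and the trace continues, so it is noise rather than a failure, but the usual guard is to default the value before comparing. A minimal sketch, assuming a flag-style variable (SPDK_SOME_FLAG below is a placeholder; the real variable name at common.sh:33 is not visible in this trace):

# Failing shape, as traced: an empty expansion reaches the numeric test.
#   [ "$SPDK_SOME_FLAG" -eq 1 ]   ->  "[: : integer expression expected"
# Guarded shape: ${var:-0} substitutes 0 when the variable is unset or empty,
# so -eq always sees an integer operand.
if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi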
00:17:34.758 [2024-11-17 14:27:23.323040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:34.758 [2024-11-17 14:27:23.323153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:34.758 [2024-11-17 14:27:23.323258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.758 [2024-11-17 14:27:23.323259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:35.018 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.018 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:35.018 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.018 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.018 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.018 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.018 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:35.018 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.018 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.018 [2024-11-17 14:27:24.089280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.018 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.019 Malloc0 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.019 [2024-11-17 14:27:24.133542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.019 { 00:17:35.019 "params": { 00:17:35.019 "name": "Nvme$subsystem", 00:17:35.019 "trtype": "$TEST_TRANSPORT", 00:17:35.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.019 "adrfam": "ipv4", 00:17:35.019 "trsvcid": "$NVMF_PORT", 00:17:35.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.019 "hdgst": ${hdgst:-false}, 00:17:35.019 "ddgst": ${ddgst:-false} 00:17:35.019 }, 00:17:35.019 "method": "bdev_nvme_attach_controller" 00:17:35.019 } 00:17:35.019 EOF 00:17:35.019 )") 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:35.019 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:35.019 "params": { 00:17:35.019 "name": "Nvme1", 00:17:35.019 "trtype": "tcp", 00:17:35.019 "traddr": "10.0.0.2", 00:17:35.019 "adrfam": "ipv4", 00:17:35.019 "trsvcid": "4420", 00:17:35.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.019 "hdgst": false, 00:17:35.019 "ddgst": false 00:17:35.019 }, 00:17:35.019 "method": "bdev_nvme_attach_controller" 00:17:35.019 }' 00:17:35.019 [2024-11-17 14:27:24.182745] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
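The gen_nvmf_target_json trace above shows how the bdevio JSON config is synthesized in shell: each subsystem stanza is captured from a templated heredoc into a bash array (nvmf/common.sh@582), the array is joined on commas and validated with jq (@584-586), and bdevio reads the result on /dev/fd/62 via its --json flag. A standalone sketch of the same idiom, trimmed to one subsystem with illustrative values (the real common.sh uses a tab-indented <<-EOF heredoc and adds hostnqn/digest parameters):

subsystem=1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=()
# Unquoted EOF, so $subsystem and the address variables expand inside the heredoc.
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
# Join the stanzas on commas and pretty-print/validate, as the trace does.
IFS=,
printf '%s\n' "${config[*]}" | jq .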
00:17:35.019 [2024-11-17 14:27:24.182796] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1471625 ] 00:17:35.278 [2024-11-17 14:27:24.265644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:35.278 [2024-11-17 14:27:24.314287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.278 [2024-11-17 14:27:24.314393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.278 [2024-11-17 14:27:24.314393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.539 I/O targets: 00:17:35.539 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:35.539 00:17:35.539 00:17:35.539 CUnit - A unit testing framework for C - Version 2.1-3 00:17:35.539 http://cunit.sourceforge.net/ 00:17:35.539 00:17:35.539 00:17:35.539 Suite: bdevio tests on: Nvme1n1 00:17:35.539 Test: blockdev write read block ...passed 00:17:35.539 Test: blockdev write zeroes read block ...passed 00:17:35.539 Test: blockdev write zeroes read no split ...passed 00:17:35.539 Test: blockdev write zeroes read split ...passed 00:17:35.800 Test: blockdev write zeroes read split partial ...passed 00:17:35.800 Test: blockdev reset ...[2024-11-17 14:27:24.811086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:35.800 [2024-11-17 14:27:24.811153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2399920 (9): Bad file descriptor 00:17:35.800 [2024-11-17 14:27:24.882169] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:35.800 passed 00:17:35.800 Test: blockdev write read 8 blocks ...passed 00:17:35.800 Test: blockdev write read size > 128k ...passed 00:17:35.800 Test: blockdev write read invalid size ...passed 00:17:35.800 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:35.800 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:35.800 Test: blockdev write read max offset ...passed 00:17:36.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:36.060 Test: blockdev writev readv 8 blocks ...passed 00:17:36.060 Test: blockdev writev readv 30 x 1block ...passed 00:17:36.060 Test: blockdev writev readv block ...passed 00:17:36.060 Test: blockdev writev readv size > 128k ...passed 00:17:36.060 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:36.060 Test: blockdev comparev and writev ...[2024-11-17 14:27:25.094251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.060 [2024-11-17 14:27:25.094282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.060 [2024-11-17 14:27:25.094296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.060 [2024-11-17 14:27:25.094304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:36.060 [2024-11-17 14:27:25.094550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.060 [2024-11-17 14:27:25.094562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:36.060 [2024-11-17 14:27:25.094573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.060 [2024-11-17 14:27:25.094580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:36.060 [2024-11-17 14:27:25.094800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.060 [2024-11-17 14:27:25.094811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:36.060 [2024-11-17 14:27:25.094822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.060 [2024-11-17 14:27:25.094834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:36.060 [2024-11-17 14:27:25.095056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.060 [2024-11-17 14:27:25.095068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:36.060 [2024-11-17 14:27:25.095081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.060 [2024-11-17 14:27:25.095088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:36.060 passed 00:17:36.060 Test: blockdev nvme passthru rw ...passed 00:17:36.060 Test: blockdev nvme passthru vendor specific ...[2024-11-17 14:27:25.176685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.060 [2024-11-17 14:27:25.176703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:36.060 [2024-11-17 14:27:25.176805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.060 [2024-11-17 14:27:25.176815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:36.060 [2024-11-17 14:27:25.176917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.060 [2024-11-17 14:27:25.176926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:36.060 [2024-11-17 14:27:25.177027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.060 [2024-11-17 14:27:25.177037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:36.060 passed 00:17:36.060 Test: blockdev nvme admin passthru ...passed 00:17:36.060 Test: blockdev copy ...passed 00:17:36.060 00:17:36.060 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.060 suites 1 1 n/a 0 0 00:17:36.060 tests 23 23 23 0 0 00:17:36.060 asserts 152 152 152 0 n/a 00:17:36.060 00:17:36.060 Elapsed time = 1.265 seconds 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.320 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.320 rmmod nvme_tcp 00:17:36.320 rmmod nvme_fabrics 00:17:36.585 rmmod nvme_keyring 00:17:36.585 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.585 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:36.585 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:36.585 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1471376 ']' 00:17:36.585 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1471376 00:17:36.586 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1471376 ']' 00:17:36.586 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1471376 00:17:36.586 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:36.586 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.586 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1471376 00:17:36.586 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:36.586 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:36.586 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1471376' 00:17:36.586 killing process with pid 1471376 00:17:36.586 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1471376 00:17:36.586 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1471376 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.848 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.389 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:39.389 00:17:39.389 real 0m11.021s 00:17:39.389 user 0m14.587s 00:17:39.389 sys 0m5.393s 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.389 ************************************ 00:17:39.389 END TEST nvmf_bdevio_no_huge 00:17:39.389 ************************************ 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.389 ************************************ 00:17:39.389 START TEST nvmf_tls 00:17:39.389 ************************************ 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:39.389 * Looking for test storage... 00:17:39.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.389 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:39.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.390 --rc genhtml_branch_coverage=1 00:17:39.390 --rc genhtml_function_coverage=1 00:17:39.390 --rc genhtml_legend=1 00:17:39.390 --rc geninfo_all_blocks=1 00:17:39.390 --rc geninfo_unexecuted_blocks=1 00:17:39.390 00:17:39.390 ' 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:39.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.390 --rc genhtml_branch_coverage=1 00:17:39.390 --rc genhtml_function_coverage=1 00:17:39.390 --rc genhtml_legend=1 00:17:39.390 --rc geninfo_all_blocks=1 00:17:39.390 --rc geninfo_unexecuted_blocks=1 00:17:39.390 00:17:39.390 ' 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:39.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.390 --rc genhtml_branch_coverage=1 00:17:39.390 --rc genhtml_function_coverage=1 00:17:39.390 --rc genhtml_legend=1 00:17:39.390 --rc geninfo_all_blocks=1 00:17:39.390 --rc geninfo_unexecuted_blocks=1 00:17:39.390 00:17:39.390 ' 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:39.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.390 --rc genhtml_branch_coverage=1 00:17:39.390 --rc genhtml_function_coverage=1 00:17:39.390 --rc genhtml_legend=1 00:17:39.390 --rc geninfo_all_blocks=1 00:17:39.390 --rc geninfo_unexecuted_blocks=1 00:17:39.390 00:17:39.390 ' 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
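The cmp_versions trace above (the same prologue already ran before the bdevio test) is the lcov version gate: lt 1.15 2 splits both versions on '.'/'-' into arrays and compares them field by field, with decimal() normalizing each field to an integer; a 0 return enables the branch/function coverage options seen in the LCOV_OPTS export. A simplified sketch of the comparison loop, without the suffix/hex handling the real scripts/common.sh performs:

# Return 0 (true) when $1 is a strictly lower version than $2.
lt() {
    local -a v1 v2
    local i
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2: enable the branch/function coverage opts"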
00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:39.390 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.970 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.970 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:45.970 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:45.970 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:45.970 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:45.970 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:45.970 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:45.970 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:45.971 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:45.971 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:45.971 Found net devices under 0000:86:00.0: cvl_0_0 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:45.971 Found net devices under 0000:86:00.1: cvl_0_1 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.971 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.971 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.971 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.971 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:45.971 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:45.971 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.971 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:45.971 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:45.971 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:45.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:17:45.971 00:17:45.971 --- 10.0.0.2 ping statistics --- 00:17:45.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.971 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:17:45.971 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:45.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:45.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:17:45.971 00:17:45.971 --- 10.0.0.1 ping statistics --- 00:17:45.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.972 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1475380 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1475380 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1475380 ']' 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.972 [2024-11-17 14:27:34.276569] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:17:45.972 [2024-11-17 14:27:34.276622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.972 [2024-11-17 14:27:34.359754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.972 [2024-11-17 14:27:34.400989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.972 [2024-11-17 14:27:34.401026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.972 [2024-11-17 14:27:34.401033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.972 [2024-11-17 14:27:34.401039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.972 [2024-11-17 14:27:34.401045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:45.972 [2024-11-17 14:27:34.401593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:45.972 true 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:45.972 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:45.972 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.972 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:46.232 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:46.232 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:46.232 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:46.232 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.232 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:46.492 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:46.492 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:46.492 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.492 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:46.752 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:46.752 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:46.752 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:47.011 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:47.011 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:47.011 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:47.011 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:47.011 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:47.271 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:47.271 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.pp7zSJ8e3K 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.1KT3KpzXkv 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.pp7zSJ8e3K 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.1KT3KpzXkv 00:17:47.530 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:47.790 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:48.049 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.pp7zSJ8e3K 00:17:48.049 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pp7zSJ8e3K 00:17:48.049 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:48.050 [2024-11-17 14:27:37.263962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.309 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:48.309 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:48.568 [2024-11-17 14:27:37.616852] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.568 [2024-11-17 14:27:37.617055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.568 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:48.827 malloc0 00:17:48.827 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:48.827 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pp7zSJ8e3K 00:17:49.085 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:49.344 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.pp7zSJ8e3K 00:17:59.343 Initializing NVMe Controllers 00:17:59.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:59.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:59.343 Initialization complete. Launching workers. 00:17:59.343 ======================================================== 00:17:59.343 Latency(us) 00:17:59.343 Device Information : IOPS MiB/s Average min max 00:17:59.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16336.76 63.82 3917.65 857.37 5227.43 00:17:59.343 ======================================================== 00:17:59.343 Total : 16336.76 63.82 3917.65 857.37 5227.43 00:17:59.343 00:17:59.343 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pp7zSJ8e3K 00:17:59.343 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.343 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:59.343 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.343 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pp7zSJ8e3K 00:17:59.343 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.343 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1477743 00:17:59.343 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.344 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.344 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1477743 /var/tmp/bdevperf.sock 00:17:59.344 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1477743 ']' 00:17:59.344 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.344 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.344 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:59.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.344 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.344 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.344 [2024-11-17 14:27:48.563060] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:17:59.344 [2024-11-17 14:27:48.563110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477743 ] 00:17:59.603 [2024-11-17 14:27:48.637729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.603 [2024-11-17 14:27:48.679415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.603 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.603 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:59.603 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pp7zSJ8e3K 00:17:59.863 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:00.138 [2024-11-17 14:27:49.125744] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:00.138 TLSTESTn1 00:18:00.138 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:00.138 Running I/O for 10 seconds... 
00:18:02.148 5305.00 IOPS, 20.72 MiB/s [2024-11-17T13:27:52.750Z] 5395.50 IOPS, 21.08 MiB/s [2024-11-17T13:27:53.318Z] 5408.00 IOPS, 21.12 MiB/s [2024-11-17T13:27:54.694Z] 5447.25 IOPS, 21.28 MiB/s [2024-11-17T13:27:55.629Z] 5461.40 IOPS, 21.33 MiB/s [2024-11-17T13:27:56.565Z] 5401.83 IOPS, 21.10 MiB/s [2024-11-17T13:27:57.502Z] 5395.14 IOPS, 21.07 MiB/s [2024-11-17T13:27:58.440Z] 5419.62 IOPS, 21.17 MiB/s [2024-11-17T13:27:59.376Z] 5434.44 IOPS, 21.23 MiB/s [2024-11-17T13:27:59.376Z] 5437.00 IOPS, 21.24 MiB/s 00:18:10.151 Latency(us) 00:18:10.151 [2024-11-17T13:27:59.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.151 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.151 Verification LBA range: start 0x0 length 0x2000 00:18:10.151 TLSTESTn1 : 10.01 5441.93 21.26 0.00 0.00 23487.33 4900.95 22567.18 00:18:10.151 [2024-11-17T13:27:59.376Z] =================================================================================================================== 00:18:10.151 [2024-11-17T13:27:59.376Z] Total : 5441.93 21.26 0.00 0.00 23487.33 4900.95 22567.18 00:18:10.151 { 00:18:10.151 "results": [ 00:18:10.151 { 00:18:10.151 "job": "TLSTESTn1", 00:18:10.151 "core_mask": "0x4", 00:18:10.151 "workload": "verify", 00:18:10.151 "status": "finished", 00:18:10.151 "verify_range": { 00:18:10.151 "start": 0, 00:18:10.151 "length": 8192 00:18:10.151 }, 00:18:10.151 "queue_depth": 128, 00:18:10.151 "io_size": 4096, 00:18:10.151 "runtime": 10.014276, 00:18:10.151 "iops": 5441.931099162835, 00:18:10.151 "mibps": 21.257543356104826, 00:18:10.151 "io_failed": 0, 00:18:10.151 "io_timeout": 0, 00:18:10.151 "avg_latency_us": 23487.331171097572, 00:18:10.151 "min_latency_us": 4900.953043478261, 00:18:10.151 "max_latency_us": 22567.179130434783 00:18:10.151 } 00:18:10.151 ], 00:18:10.151 "core_count": 1 00:18:10.151 } 00:18:10.151 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.151 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1477743 00:18:10.151 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1477743 ']' 00:18:10.151 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1477743 00:18:10.151 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.151 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1477743 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1477743' 00:18:10.409 killing process with pid 1477743 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1477743 00:18:10.409 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.409 00:18:10.409 Latency(us) 00:18:10.409 [2024-11-17T13:27:59.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.409 [2024-11-17T13:27:59.634Z] 
=================================================================================================================== 00:18:10.409 [2024-11-17T13:27:59.634Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1477743 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1KT3KpzXkv 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1KT3KpzXkv 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1KT3KpzXkv 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1KT3KpzXkv 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1479586 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1479586 /var/tmp/bdevperf.sock 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1479586 ']' 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.409 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 [2024-11-17 14:27:59.621850] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:10.409 [2024-11-17 14:27:59.621896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479586 ] 00:18:10.668 [2024-11-17 14:27:59.696087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.668 [2024-11-17 14:27:59.737586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.668 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.668 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.668 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1KT3KpzXkv 00:18:10.927 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:11.187 [2024-11-17 14:28:00.187692] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.187 [2024-11-17 14:28:00.194397] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:11.187 [2024-11-17 14:28:00.195173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78170 (107): Transport endpoint is not connected 00:18:11.187 [2024-11-17 14:28:00.196166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78170 (9): Bad file descriptor 00:18:11.187 [2024-11-17 14:28:00.197169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:11.187 [2024-11-17 14:28:00.197181] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:11.187 [2024-11-17 14:28:00.197188] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:11.187 [2024-11-17 14:28:00.197199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
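The failure above is the intended outcome of target/tls.sh@147: the bdevperf side registers the second key, /tmp/tmp.1KT3KpzXkv, which the target only ever associated with nothing — it added host1 with /tmp/tmp.pp7zSJ8e3K — so the TLS handshake is rejected and the attach RPC returns the -5 Input/output error shown in the JSON-RPC dump that follows. A hedged replay of just that negative step, with the rpc.py subcommands and flags copied from the trace:

    # Negative-path sketch: attach with a PSK the target does not know.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1KT3KpzXkv
    if $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
         -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
      echo "unexpected: attach succeeded with the wrong key" >&2
    else
      echo "attach failed as expected (Input/output error)"
    fi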
00:18:11.187 request: 00:18:11.187 { 00:18:11.187 "name": "TLSTEST", 00:18:11.187 "trtype": "tcp", 00:18:11.187 "traddr": "10.0.0.2", 00:18:11.187 "adrfam": "ipv4", 00:18:11.187 "trsvcid": "4420", 00:18:11.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.187 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.187 "prchk_reftag": false, 00:18:11.187 "prchk_guard": false, 00:18:11.187 "hdgst": false, 00:18:11.187 "ddgst": false, 00:18:11.187 "psk": "key0", 00:18:11.187 "allow_unrecognized_csi": false, 00:18:11.187 "method": "bdev_nvme_attach_controller", 00:18:11.187 "req_id": 1 00:18:11.187 } 00:18:11.187 Got JSON-RPC error response 00:18:11.187 response: 00:18:11.187 { 00:18:11.187 "code": -5, 00:18:11.187 "message": "Input/output error" 00:18:11.187 } 00:18:11.187 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1479586 00:18:11.187 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1479586 ']' 00:18:11.187 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1479586 00:18:11.187 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.187 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.187 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479586 00:18:11.187 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:11.187 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:11.187 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479586' 00:18:11.187 killing process with pid 1479586 00:18:11.187 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1479586 00:18:11.187 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.187 00:18:11.187 Latency(us) 00:18:11.187 [2024-11-17T13:28:00.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.187 [2024-11-17T13:28:00.413Z] =================================================================================================================== 00:18:11.188 [2024-11-17T13:28:00.413Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.188 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1479586 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pp7zSJ8e3K 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.pp7zSJ8e3K 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pp7zSJ8e3K 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pp7zSJ8e3K 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1479610 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1479610 /var/tmp/bdevperf.sock 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1479610 ']' 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.448 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.448 [2024-11-17 14:28:00.488602] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:18:11.448 [2024-11-17 14:28:00.488652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479610 ] 00:18:11.448 [2024-11-17 14:28:00.565929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.448 [2024-11-17 14:28:00.605009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.707 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.707 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:11.707 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pp7zSJ8e3K 00:18:11.707 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:11.967 [2024-11-17 14:28:01.072713] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.967 [2024-11-17 14:28:01.077440] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:11.967 [2024-11-17 14:28:01.077461] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:11.967 [2024-11-17 14:28:01.077485] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:11.967 [2024-11-17 14:28:01.078148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149a170 (107): Transport endpoint is not connected 00:18:11.967 [2024-11-17 14:28:01.079140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149a170 (9): Bad file descriptor 00:18:11.967 [2024-11-17 14:28:01.080142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:11.967 [2024-11-17 14:28:01.080152] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:11.967 [2024-11-17 14:28:01.080159] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:11.967 [2024-11-17 14:28:01.080172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
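This second failure (target/tls.sh@150, dumped in the request/response that follows) uses the correct key but the wrong host NQN. As the target's own error shows, the PSK is looked up by the identity string "NVMe0R01 <hostnqn> <subnqn>", so a key registered for host1 can never match a handshake arriving as host2. Reconstructing the identity from the two NQNs in this run:

    # PSK identity exactly as printed in the lookup error above:
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"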
00:18:11.967 request: 00:18:11.967 { 00:18:11.967 "name": "TLSTEST", 00:18:11.967 "trtype": "tcp", 00:18:11.967 "traddr": "10.0.0.2", 00:18:11.967 "adrfam": "ipv4", 00:18:11.967 "trsvcid": "4420", 00:18:11.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.967 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:11.967 "prchk_reftag": false, 00:18:11.967 "prchk_guard": false, 00:18:11.967 "hdgst": false, 00:18:11.967 "ddgst": false, 00:18:11.967 "psk": "key0", 00:18:11.967 "allow_unrecognized_csi": false, 00:18:11.967 "method": "bdev_nvme_attach_controller", 00:18:11.967 "req_id": 1 00:18:11.967 } 00:18:11.967 Got JSON-RPC error response 00:18:11.967 response: 00:18:11.967 { 00:18:11.967 "code": -5, 00:18:11.967 "message": "Input/output error" 00:18:11.967 } 00:18:11.967 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1479610 00:18:11.967 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1479610 ']' 00:18:11.967 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1479610 00:18:11.967 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.968 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.968 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479610 00:18:11.968 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:11.968 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:11.968 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479610' 00:18:11.968 killing process with pid 1479610 00:18:11.968 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1479610 00:18:11.968 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.968 00:18:11.968 Latency(us) 00:18:11.968 [2024-11-17T13:28:01.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.968 [2024-11-17T13:28:01.193Z] =================================================================================================================== 00:18:11.968 [2024-11-17T13:28:01.193Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.968 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1479610 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pp7zSJ8e3K 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.pp7zSJ8e3K 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pp7zSJ8e3K 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pp7zSJ8e3K 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1479838 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1479838 /var/tmp/bdevperf.sock 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1479838 ']' 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.228 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.228 [2024-11-17 14:28:01.356608] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:18:12.228 [2024-11-17 14:28:01.356661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479838 ] 00:18:12.228 [2024-11-17 14:28:01.432289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.487 [2024-11-17 14:28:01.469121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.487 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.488 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:12.488 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pp7zSJ8e3K 00:18:12.747 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:12.747 [2024-11-17 14:28:01.927291] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.747 [2024-11-17 14:28:01.938689] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:12.747 [2024-11-17 14:28:01.938710] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:12.747 [2024-11-17 14:28:01.938733] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:12.747 [2024-11-17 14:28:01.939663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cce170 (107): Transport endpoint is not connected 00:18:12.747 [2024-11-17 14:28:01.940657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cce170 (9): Bad file descriptor 00:18:12.747 [2024-11-17 14:28:01.941659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:12.747 [2024-11-17 14:28:01.941674] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:12.747 [2024-11-17 14:28:01.941686] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:12.747 [2024-11-17 14:28:01.941697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
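target/tls.sh@153 is the mirror case, failing in the request/response dumped below: right key, right host NQN, but the subsystem NQN is cnode2, for which no host/PSK pairing was ever registered, so the identity lookup fails the same way. All three negative cases (@147 wrong key, @150 wrong hostnqn, @153 wrong subnqn) run under the NOT wrapper, which succeeds only when the wrapped command fails. A minimal sketch of that inversion, assuming nothing beyond POSIX shell (SPDK's own helper lives in autotest_common.sh):

    # Minimal NOT-style inversion: exit 0 iff the wrapped command fails.
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT false && echo "negative test passed"
    NOT true  || echo "positive command correctly rejected"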
00:18:12.747 request: 00:18:12.747 { 00:18:12.747 "name": "TLSTEST", 00:18:12.747 "trtype": "tcp", 00:18:12.747 "traddr": "10.0.0.2", 00:18:12.747 "adrfam": "ipv4", 00:18:12.747 "trsvcid": "4420", 00:18:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:12.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.747 "prchk_reftag": false, 00:18:12.747 "prchk_guard": false, 00:18:12.747 "hdgst": false, 00:18:12.747 "ddgst": false, 00:18:12.747 "psk": "key0", 00:18:12.747 "allow_unrecognized_csi": false, 00:18:12.747 "method": "bdev_nvme_attach_controller", 00:18:12.747 "req_id": 1 00:18:12.747 } 00:18:12.747 Got JSON-RPC error response 00:18:12.747 response: 00:18:12.747 { 00:18:12.747 "code": -5, 00:18:12.747 "message": "Input/output error" 00:18:12.747 } 00:18:12.747 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1479838 00:18:13.006 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1479838 ']' 00:18:13.006 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1479838 00:18:13.006 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.006 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.006 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479838 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479838' 00:18:13.006 killing process with pid 1479838 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1479838 00:18:13.006 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.006 00:18:13.006 Latency(us) 00:18:13.006 [2024-11-17T13:28:02.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.006 [2024-11-17T13:28:02.231Z] =================================================================================================================== 00:18:13.006 [2024-11-17T13:28:02.231Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1479838 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:13.006 
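The valid_exec_arg/es bookkeeping running through these lines is autotest_common.sh's NOT wrapper: the test step passes only when the wrapped command fails, and exit codes above 128 (signal deaths) get special handling. A simplified sketch of the pattern, assuming this reduced form rather than the exact helper:

# Simplified negative-test wrapper: succeeds iff the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # signal deaths (es > 128) also land here as failures
}
NOT false && echo "negative test passed"    # false fails, so NOT returns 0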
14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1480042 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1480042 /var/tmp/bdevperf.sock 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1480042 ']' 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.006 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.006 [2024-11-17 14:28:02.218840] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
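This round points run_bdevperf at an empty key path. keyring_file validates the path before reading anything, so the registration below fails up front and the attach that follows never has a key0 to load (rpc.py stands for the full scripts/rpc.py path):

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''    # fails the absolute-path check: -1, Operation not permitted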
00:18:13.006 [2024-11-17 14:28:02.218890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480042 ] 00:18:13.265 [2024-11-17 14:28:02.293701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.265 [2024-11-17 14:28:02.332137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.265 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.265 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:13.265 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:13.525 [2024-11-17 14:28:02.586684] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:13.525 [2024-11-17 14:28:02.586719] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:13.525 request: 00:18:13.525 { 00:18:13.526 "name": "key0", 00:18:13.526 "path": "", 00:18:13.526 "method": "keyring_file_add_key", 00:18:13.526 "req_id": 1 00:18:13.526 } 00:18:13.526 Got JSON-RPC error response 00:18:13.526 response: 00:18:13.526 { 00:18:13.526 "code": -1, 00:18:13.526 "message": "Operation not permitted" 00:18:13.526 } 00:18:13.526 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.785 [2024-11-17 14:28:02.775267] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.785 [2024-11-17 14:28:02.775297] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:13.785 request: 00:18:13.785 { 00:18:13.785 "name": "TLSTEST", 00:18:13.785 "trtype": "tcp", 00:18:13.785 "traddr": "10.0.0.2", 00:18:13.785 "adrfam": "ipv4", 00:18:13.785 "trsvcid": "4420", 00:18:13.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.785 "prchk_reftag": false, 00:18:13.785 "prchk_guard": false, 00:18:13.785 "hdgst": false, 00:18:13.785 "ddgst": false, 00:18:13.785 "psk": "key0", 00:18:13.785 "allow_unrecognized_csi": false, 00:18:13.785 "method": "bdev_nvme_attach_controller", 00:18:13.785 "req_id": 1 00:18:13.785 } 00:18:13.785 Got JSON-RPC error response 00:18:13.785 response: 00:18:13.785 { 00:18:13.785 "code": -126, 00:18:13.785 "message": "Required key not available" 00:18:13.785 } 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1480042 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1480042 ']' 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1480042 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1480042 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1480042' 00:18:13.785 killing process with pid 1480042 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1480042 00:18:13.785 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.785 00:18:13.785 Latency(us) 00:18:13.785 [2024-11-17T13:28:03.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.785 [2024-11-17T13:28:03.010Z] =================================================================================================================== 00:18:13.785 [2024-11-17T13:28:03.010Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1480042 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.785 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.785 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1475380 00:18:13.785 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1475380 ']' 00:18:13.785 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1475380 00:18:13.785 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.785 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1475380 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1475380' 00:18:14.045 killing process with pid 1475380 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1475380 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1475380 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:14.045 14:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:14.045 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:14.046 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:14.046 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:14.046 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.V4UOsF7x0p 00:18:14.046 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:14.046 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.V4UOsF7x0p 00:18:14.046 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:14.046 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.046 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.046 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.305 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:14.305 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1480117 00:18:14.305 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1480117 00:18:14.305 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1480117 ']' 00:18:14.305 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.305 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.305 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.305 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.305 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.305 [2024-11-17 14:28:03.308890] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:14.305 [2024-11-17 14:28:03.308936] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.305 [2024-11-17 14:28:03.390283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.305 [2024-11-17 14:28:03.430020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.306 [2024-11-17 14:28:03.430053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
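format_interchange_psk above wraps the 48-character secret in the TP-8018 interchange layout: the NVMeTLSkey-1 prefix, a two-digit hash indicator (digest 2 prints as 02, selecting the SHA-384 variant), then base64 of the key bytes with a CRC32 trailer, and a closing colon. A hedged reconstruction of what the inline python step computes, with the little-endian CRC byte order assumed from the printed result:

key=00112233445566778899aabbccddeeff0011223344556677
python3 -c 'import base64,sys,zlib; raw=sys.argv[1].encode(); crc=zlib.crc32(raw).to_bytes(4,"little"); print("NVMeTLSkey-1:02:"+base64.b64encode(raw+crc).decode()+":")' "$key"
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: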
00:18:14.306 [2024-11-17 14:28:03.430061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.306 [2024-11-17 14:28:03.430067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.306 [2024-11-17 14:28:03.430073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.306 [2024-11-17 14:28:03.430658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.565 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.565 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.565 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.565 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.565 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.565 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.565 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.V4UOsF7x0p 00:18:14.565 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.V4UOsF7x0p 00:18:14.565 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:14.565 [2024-11-17 14:28:03.750064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.565 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:14.824 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:15.084 [2024-11-17 14:28:04.119020] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.084 [2024-11-17 14:28:04.119228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.084 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:15.343 malloc0 00:18:15.343 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:15.343 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.V4UOsF7x0p 00:18:15.603 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V4UOsF7x0p 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.V4UOsF7x0p 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1480483 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1480483 /var/tmp/bdevperf.sock 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1480483 ']' 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.862 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.862 [2024-11-17 14:28:04.916604] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
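The positive case reuses the same bdevperf invocation seen above; its flags decode as follows, and because of -z the process idles until the companion script triggers the run over the same RPC socket:

# bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
#   -m 0x4      reactor core mask (hence "Reactor started on core 2")
#   -z          start suspended and wait for the perform_tests RPC
#   -r PATH     RPC listen socket
#   -q 128      queue depth
#   -o 4096     I/O size in bytes
#   -w verify   write, read back, and compare
#   -t 10       run time in seconds
# Kick off the queued run (-t 20 here is the RPC timeout, not the run time):
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests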
00:18:15.862 [2024-11-17 14:28:04.916651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480483 ] 00:18:15.862 [2024-11-17 14:28:04.990702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.862 [2024-11-17 14:28:05.032959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.122 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.122 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:16.122 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.V4UOsF7x0p 00:18:16.122 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:16.381 [2024-11-17 14:28:05.471214] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.381 TLSTESTn1 00:18:16.381 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:16.641 Running I/O for 10 seconds... 00:18:18.516 5005.00 IOPS, 19.55 MiB/s [2024-11-17T13:28:08.678Z] 5242.00 IOPS, 20.48 MiB/s [2024-11-17T13:28:10.057Z] 5347.33 IOPS, 20.89 MiB/s [2024-11-17T13:28:10.994Z] 5333.50 IOPS, 20.83 MiB/s [2024-11-17T13:28:11.933Z] 5364.20 IOPS, 20.95 MiB/s [2024-11-17T13:28:12.870Z] 5377.50 IOPS, 21.01 MiB/s [2024-11-17T13:28:13.808Z] 5403.71 IOPS, 21.11 MiB/s [2024-11-17T13:28:14.746Z] 5403.25 IOPS, 21.11 MiB/s [2024-11-17T13:28:15.683Z] 5415.89 IOPS, 21.16 MiB/s [2024-11-17T13:28:15.942Z] 5383.60 IOPS, 21.03 MiB/s 00:18:26.717 Latency(us) 00:18:26.717 [2024-11-17T13:28:15.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.717 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:26.717 Verification LBA range: start 0x0 length 0x2000 00:18:26.717 TLSTESTn1 : 10.02 5386.04 21.04 0.00 0.00 23728.66 5584.81 37611.97 00:18:26.717 [2024-11-17T13:28:15.942Z] =================================================================================================================== 00:18:26.717 [2024-11-17T13:28:15.942Z] Total : 5386.04 21.04 0.00 0.00 23728.66 5584.81 37611.97 00:18:26.717 { 00:18:26.717 "results": [ 00:18:26.717 { 00:18:26.717 "job": "TLSTESTn1", 00:18:26.717 "core_mask": "0x4", 00:18:26.717 "workload": "verify", 00:18:26.717 "status": "finished", 00:18:26.717 "verify_range": { 00:18:26.717 "start": 0, 00:18:26.717 "length": 8192 00:18:26.717 }, 00:18:26.717 "queue_depth": 128, 00:18:26.717 "io_size": 4096, 00:18:26.717 "runtime": 10.019236, 00:18:26.717 "iops": 5386.03941458211, 00:18:26.717 "mibps": 21.039216463211368, 00:18:26.717 "io_failed": 0, 00:18:26.717 "io_timeout": 0, 00:18:26.717 "avg_latency_us": 23728.664874554048, 00:18:26.717 "min_latency_us": 5584.806956521739, 00:18:26.717 "max_latency_us": 37611.965217391305 00:18:26.717 } 00:18:26.717 ], 00:18:26.717 
"core_count": 1 00:18:26.717 } 00:18:26.717 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:26.717 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1480483 00:18:26.717 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1480483 ']' 00:18:26.717 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1480483 00:18:26.717 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:26.717 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.717 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1480483 00:18:26.717 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:26.717 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:26.717 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1480483' 00:18:26.717 killing process with pid 1480483 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1480483 00:18:26.718 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.718 00:18:26.718 Latency(us) 00:18:26.718 [2024-11-17T13:28:15.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.718 [2024-11-17T13:28:15.943Z] =================================================================================================================== 00:18:26.718 [2024-11-17T13:28:15.943Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1480483 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.V4UOsF7x0p 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V4UOsF7x0p 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V4UOsF7x0p 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V4UOsF7x0p 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.V4UOsF7x0p 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1482194 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1482194 /var/tmp/bdevperf.sock 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1482194 ']' 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.718 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.977 [2024-11-17 14:28:15.968663] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:18:26.977 [2024-11-17 14:28:15.968710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482194 ] 00:18:26.977 [2024-11-17 14:28:16.039144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.977 [2024-11-17 14:28:16.081295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.977 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.977 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:26.977 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.V4UOsF7x0p 00:18:27.236 [2024-11-17 14:28:16.343275] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.V4UOsF7x0p': 0100666 00:18:27.236 [2024-11-17 14:28:16.343303] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:27.236 request: 00:18:27.236 { 00:18:27.236 "name": "key0", 00:18:27.236 "path": "/tmp/tmp.V4UOsF7x0p", 00:18:27.236 "method": "keyring_file_add_key", 00:18:27.236 "req_id": 1 00:18:27.236 } 00:18:27.236 Got JSON-RPC error response 00:18:27.236 response: 00:18:27.236 { 00:18:27.236 "code": -1, 00:18:27.236 "message": "Operation not permitted" 00:18:27.236 } 00:18:27.236 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.496 [2024-11-17 14:28:16.535857] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.496 [2024-11-17 14:28:16.535891] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:27.496 request: 00:18:27.496 { 00:18:27.496 "name": "TLSTEST", 00:18:27.496 "trtype": "tcp", 00:18:27.496 "traddr": "10.0.0.2", 00:18:27.496 "adrfam": "ipv4", 00:18:27.496 "trsvcid": "4420", 00:18:27.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.496 "prchk_reftag": false, 00:18:27.496 "prchk_guard": false, 00:18:27.496 "hdgst": false, 00:18:27.496 "ddgst": false, 00:18:27.496 "psk": "key0", 00:18:27.496 "allow_unrecognized_csi": false, 00:18:27.496 "method": "bdev_nvme_attach_controller", 00:18:27.496 "req_id": 1 00:18:27.496 } 00:18:27.496 Got JSON-RPC error response 00:18:27.496 response: 00:18:27.496 { 00:18:27.496 "code": -126, 00:18:27.496 "message": "Required key not available" 00:18:27.496 } 00:18:27.496 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1482194 00:18:27.496 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1482194 ']' 00:18:27.496 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1482194 00:18:27.496 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.496 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.496 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1482194 00:18:27.496 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.496 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.496 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1482194' 00:18:27.496 killing process with pid 1482194 00:18:27.496 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1482194 00:18:27.496 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.496 00:18:27.496 Latency(us) 00:18:27.496 [2024-11-17T13:28:16.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.496 [2024-11-17T13:28:16.721Z] =================================================================================================================== 00:18:27.496 [2024-11-17T13:28:16.721Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.496 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1482194 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1480117 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1480117 ']' 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1480117 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1480117 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1480117' 00:18:27.755 killing process with pid 1480117 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1480117 00:18:27.755 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1480117 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1482434 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1482434 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1482434 ']' 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.015 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.015 [2024-11-17 14:28:17.027177] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:28.015 [2024-11-17 14:28:17.027223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.015 [2024-11-17 14:28:17.103945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.015 [2024-11-17 14:28:17.138262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.015 [2024-11-17 14:28:17.138296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.015 [2024-11-17 14:28:17.138302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.015 [2024-11-17 14:28:17.138308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.015 [2024-11-17 14:28:17.138313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
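Both rejections in this stretch come from the same keyring_file gate: the key file must not be readable by group or other, so registration fails with -1 on the initiator above and will fail again below once setup_nvmf_tgt reaches keyring_file_add_key on the target. The toggle that drives the two outcomes:

chmod 0666 /tmp/tmp.V4UOsF7x0p    # keyring_file_add_key rejects: "Invalid permissions ... 0100666"
chmod 0600 /tmp/tmp.V4UOsF7x0p    # accepted; key0 registers and the TLS attach can proceed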
00:18:28.015 [2024-11-17 14:28:17.138904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.V4UOsF7x0p 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.V4UOsF7x0p 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.V4UOsF7x0p 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.V4UOsF7x0p 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:28.274 [2024-11-17 14:28:17.461940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.274 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:28.533 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:28.792 [2024-11-17 14:28:17.850943] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.792 [2024-11-17 14:28:17.851148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.792 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:29.050 malloc0 00:18:29.050 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:29.050 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.V4UOsF7x0p 00:18:29.309 [2024-11-17 
14:28:18.432319] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.V4UOsF7x0p': 0100666 00:18:29.309 [2024-11-17 14:28:18.432347] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:29.309 request: 00:18:29.309 { 00:18:29.309 "name": "key0", 00:18:29.309 "path": "/tmp/tmp.V4UOsF7x0p", 00:18:29.310 "method": "keyring_file_add_key", 00:18:29.310 "req_id": 1 00:18:29.310 } 00:18:29.310 Got JSON-RPC error response 00:18:29.310 response: 00:18:29.310 { 00:18:29.310 "code": -1, 00:18:29.310 "message": "Operation not permitted" 00:18:29.310 } 00:18:29.310 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:29.569 [2024-11-17 14:28:18.628847] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:29.569 [2024-11-17 14:28:18.628875] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:29.569 request: 00:18:29.569 { 00:18:29.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.569 "host": "nqn.2016-06.io.spdk:host1", 00:18:29.569 "psk": "key0", 00:18:29.569 "method": "nvmf_subsystem_add_host", 00:18:29.569 "req_id": 1 00:18:29.569 } 00:18:29.569 Got JSON-RPC error response 00:18:29.569 response: 00:18:29.569 { 00:18:29.569 "code": -32603, 00:18:29.569 "message": "Internal error" 00:18:29.569 } 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1482434 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1482434 ']' 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1482434 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1482434 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1482434' 00:18:29.569 killing process with pid 1482434 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1482434 00:18:29.569 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1482434 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.V4UOsF7x0p 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:29.828 14:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1482716 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1482716 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1482716 ']' 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.828 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.828 [2024-11-17 14:28:18.941541] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:29.828 [2024-11-17 14:28:18.941586] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.828 [2024-11-17 14:28:19.022427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.087 [2024-11-17 14:28:19.061609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.087 [2024-11-17 14:28:19.061643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.087 [2024-11-17 14:28:19.061653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.087 [2024-11-17 14:28:19.061659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.087 [2024-11-17 14:28:19.061665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
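What follows is the same setup_nvmf_tgt sequence as before, now with the key back at 0600, so every step succeeds. Condensed from the rpc.py calls in the trace (rpc.py stands for the full scripts/rpc.py path):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0    # 32 MiB bdev, 4 KiB blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.V4UOsF7x0p
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0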
00:18:30.087 [2024-11-17 14:28:19.062228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.087 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.087 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:30.087 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:30.087 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:30.087 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.087 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.087 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.V4UOsF7x0p 00:18:30.087 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.V4UOsF7x0p 00:18:30.087 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:30.346 [2024-11-17 14:28:19.378497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.346 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:30.604 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:30.604 [2024-11-17 14:28:19.755461] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.604 [2024-11-17 14:28:19.755674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.604 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:30.863 malloc0 00:18:30.863 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:31.122 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.V4UOsF7x0p 00:18:31.382 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:31.382 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1483105 00:18:31.382 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.382 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.382 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1483105 /var/tmp/bdevperf.sock 00:18:31.382 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1483105 ']' 00:18:31.382 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.382 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.382 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.382 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.382 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.382 [2024-11-17 14:28:20.593660] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:31.382 [2024-11-17 14:28:20.593712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483105 ] 00:18:31.641 [2024-11-17 14:28:20.671055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.641 [2024-11-17 14:28:20.713257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.641 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.641 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.641 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.V4UOsF7x0p 00:18:31.901 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:32.160 [2024-11-17 14:28:21.181438] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.160 TLSTESTn1 00:18:32.160 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:32.420 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:32.420 "subsystems": [ 00:18:32.420 { 00:18:32.420 "subsystem": "keyring", 00:18:32.420 "config": [ 00:18:32.420 { 00:18:32.420 "method": "keyring_file_add_key", 00:18:32.420 "params": { 00:18:32.420 "name": "key0", 00:18:32.420 "path": "/tmp/tmp.V4UOsF7x0p" 00:18:32.420 } 00:18:32.420 } 00:18:32.420 ] 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "subsystem": "iobuf", 00:18:32.420 "config": [ 00:18:32.420 { 00:18:32.420 "method": "iobuf_set_options", 00:18:32.420 "params": { 00:18:32.420 "small_pool_count": 8192, 00:18:32.420 "large_pool_count": 1024, 00:18:32.420 "small_bufsize": 8192, 00:18:32.420 "large_bufsize": 135168, 00:18:32.420 "enable_numa": false 00:18:32.420 } 00:18:32.420 } 00:18:32.420 ] 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "subsystem": "sock", 00:18:32.420 "config": [ 00:18:32.420 { 00:18:32.420 "method": "sock_set_default_impl", 00:18:32.420 "params": { 00:18:32.420 "impl_name": "posix" 
00:18:32.420 } 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "method": "sock_impl_set_options", 00:18:32.420 "params": { 00:18:32.420 "impl_name": "ssl", 00:18:32.420 "recv_buf_size": 4096, 00:18:32.420 "send_buf_size": 4096, 00:18:32.420 "enable_recv_pipe": true, 00:18:32.420 "enable_quickack": false, 00:18:32.420 "enable_placement_id": 0, 00:18:32.420 "enable_zerocopy_send_server": true, 00:18:32.420 "enable_zerocopy_send_client": false, 00:18:32.420 "zerocopy_threshold": 0, 00:18:32.420 "tls_version": 0, 00:18:32.420 "enable_ktls": false 00:18:32.420 } 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "method": "sock_impl_set_options", 00:18:32.420 "params": { 00:18:32.420 "impl_name": "posix", 00:18:32.420 "recv_buf_size": 2097152, 00:18:32.420 "send_buf_size": 2097152, 00:18:32.420 "enable_recv_pipe": true, 00:18:32.420 "enable_quickack": false, 00:18:32.420 "enable_placement_id": 0, 00:18:32.420 "enable_zerocopy_send_server": true, 00:18:32.420 "enable_zerocopy_send_client": false, 00:18:32.420 "zerocopy_threshold": 0, 00:18:32.420 "tls_version": 0, 00:18:32.420 "enable_ktls": false 00:18:32.420 } 00:18:32.420 } 00:18:32.420 ] 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "subsystem": "vmd", 00:18:32.420 "config": [] 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "subsystem": "accel", 00:18:32.420 "config": [ 00:18:32.420 { 00:18:32.420 "method": "accel_set_options", 00:18:32.420 "params": { 00:18:32.420 "small_cache_size": 128, 00:18:32.420 "large_cache_size": 16, 00:18:32.420 "task_count": 2048, 00:18:32.420 "sequence_count": 2048, 00:18:32.420 "buf_count": 2048 00:18:32.420 } 00:18:32.420 } 00:18:32.420 ] 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "subsystem": "bdev", 00:18:32.420 "config": [ 00:18:32.420 { 00:18:32.420 "method": "bdev_set_options", 00:18:32.420 "params": { 00:18:32.420 "bdev_io_pool_size": 65535, 00:18:32.420 "bdev_io_cache_size": 256, 00:18:32.420 "bdev_auto_examine": true, 00:18:32.420 "iobuf_small_cache_size": 128, 00:18:32.420 "iobuf_large_cache_size": 16 00:18:32.420 } 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "method": "bdev_raid_set_options", 00:18:32.420 "params": { 00:18:32.420 "process_window_size_kb": 1024, 00:18:32.420 "process_max_bandwidth_mb_sec": 0 00:18:32.420 } 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "method": "bdev_iscsi_set_options", 00:18:32.420 "params": { 00:18:32.420 "timeout_sec": 30 00:18:32.420 } 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "method": "bdev_nvme_set_options", 00:18:32.420 "params": { 00:18:32.420 "action_on_timeout": "none", 00:18:32.420 "timeout_us": 0, 00:18:32.420 "timeout_admin_us": 0, 00:18:32.420 "keep_alive_timeout_ms": 10000, 00:18:32.420 "arbitration_burst": 0, 00:18:32.420 "low_priority_weight": 0, 00:18:32.420 "medium_priority_weight": 0, 00:18:32.420 "high_priority_weight": 0, 00:18:32.420 "nvme_adminq_poll_period_us": 10000, 00:18:32.420 "nvme_ioq_poll_period_us": 0, 00:18:32.420 "io_queue_requests": 0, 00:18:32.420 "delay_cmd_submit": true, 00:18:32.420 "transport_retry_count": 4, 00:18:32.420 "bdev_retry_count": 3, 00:18:32.420 "transport_ack_timeout": 0, 00:18:32.420 "ctrlr_loss_timeout_sec": 0, 00:18:32.420 "reconnect_delay_sec": 0, 00:18:32.420 "fast_io_fail_timeout_sec": 0, 00:18:32.420 "disable_auto_failback": false, 00:18:32.420 "generate_uuids": false, 00:18:32.420 "transport_tos": 0, 00:18:32.420 "nvme_error_stat": false, 00:18:32.420 "rdma_srq_size": 0, 00:18:32.420 "io_path_stat": false, 00:18:32.420 "allow_accel_sequence": false, 00:18:32.420 "rdma_max_cq_size": 0, 00:18:32.420 
"rdma_cm_event_timeout_ms": 0, 00:18:32.420 "dhchap_digests": [ 00:18:32.420 "sha256", 00:18:32.420 "sha384", 00:18:32.420 "sha512" 00:18:32.420 ], 00:18:32.420 "dhchap_dhgroups": [ 00:18:32.420 "null", 00:18:32.420 "ffdhe2048", 00:18:32.420 "ffdhe3072", 00:18:32.420 "ffdhe4096", 00:18:32.420 "ffdhe6144", 00:18:32.420 "ffdhe8192" 00:18:32.420 ] 00:18:32.420 } 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "method": "bdev_nvme_set_hotplug", 00:18:32.420 "params": { 00:18:32.420 "period_us": 100000, 00:18:32.420 "enable": false 00:18:32.420 } 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "method": "bdev_malloc_create", 00:18:32.420 "params": { 00:18:32.420 "name": "malloc0", 00:18:32.420 "num_blocks": 8192, 00:18:32.420 "block_size": 4096, 00:18:32.420 "physical_block_size": 4096, 00:18:32.420 "uuid": "c91c5340-14ce-40b8-af54-b3473108e9b2", 00:18:32.420 "optimal_io_boundary": 0, 00:18:32.420 "md_size": 0, 00:18:32.420 "dif_type": 0, 00:18:32.420 "dif_is_head_of_md": false, 00:18:32.420 "dif_pi_format": 0 00:18:32.420 } 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "method": "bdev_wait_for_examine" 00:18:32.420 } 00:18:32.420 ] 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "subsystem": "nbd", 00:18:32.420 "config": [] 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "subsystem": "scheduler", 00:18:32.420 "config": [ 00:18:32.420 { 00:18:32.420 "method": "framework_set_scheduler", 00:18:32.420 "params": { 00:18:32.420 "name": "static" 00:18:32.420 } 00:18:32.420 } 00:18:32.420 ] 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "subsystem": "nvmf", 00:18:32.420 "config": [ 00:18:32.420 { 00:18:32.420 "method": "nvmf_set_config", 00:18:32.420 "params": { 00:18:32.420 "discovery_filter": "match_any", 00:18:32.420 "admin_cmd_passthru": { 00:18:32.420 "identify_ctrlr": false 00:18:32.420 }, 00:18:32.420 "dhchap_digests": [ 00:18:32.420 "sha256", 00:18:32.420 "sha384", 00:18:32.420 "sha512" 00:18:32.420 ], 00:18:32.420 "dhchap_dhgroups": [ 00:18:32.420 "null", 00:18:32.420 "ffdhe2048", 00:18:32.420 "ffdhe3072", 00:18:32.420 "ffdhe4096", 00:18:32.420 "ffdhe6144", 00:18:32.420 "ffdhe8192" 00:18:32.420 ] 00:18:32.420 } 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "method": "nvmf_set_max_subsystems", 00:18:32.420 "params": { 00:18:32.420 "max_subsystems": 1024 00:18:32.420 } 00:18:32.420 }, 00:18:32.420 { 00:18:32.420 "method": "nvmf_set_crdt", 00:18:32.421 "params": { 00:18:32.421 "crdt1": 0, 00:18:32.421 "crdt2": 0, 00:18:32.421 "crdt3": 0 00:18:32.421 } 00:18:32.421 }, 00:18:32.421 { 00:18:32.421 "method": "nvmf_create_transport", 00:18:32.421 "params": { 00:18:32.421 "trtype": "TCP", 00:18:32.421 "max_queue_depth": 128, 00:18:32.421 "max_io_qpairs_per_ctrlr": 127, 00:18:32.421 "in_capsule_data_size": 4096, 00:18:32.421 "max_io_size": 131072, 00:18:32.421 "io_unit_size": 131072, 00:18:32.421 "max_aq_depth": 128, 00:18:32.421 "num_shared_buffers": 511, 00:18:32.421 "buf_cache_size": 4294967295, 00:18:32.421 "dif_insert_or_strip": false, 00:18:32.421 "zcopy": false, 00:18:32.421 "c2h_success": false, 00:18:32.421 "sock_priority": 0, 00:18:32.421 "abort_timeout_sec": 1, 00:18:32.421 "ack_timeout": 0, 00:18:32.421 "data_wr_pool_size": 0 00:18:32.421 } 00:18:32.421 }, 00:18:32.421 { 00:18:32.421 "method": "nvmf_create_subsystem", 00:18:32.421 "params": { 00:18:32.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.421 "allow_any_host": false, 00:18:32.421 "serial_number": "SPDK00000000000001", 00:18:32.421 "model_number": "SPDK bdev Controller", 00:18:32.421 "max_namespaces": 10, 00:18:32.421 "min_cntlid": 1, 00:18:32.421 
"max_cntlid": 65519, 00:18:32.421 "ana_reporting": false 00:18:32.421 } 00:18:32.421 }, 00:18:32.421 { 00:18:32.421 "method": "nvmf_subsystem_add_host", 00:18:32.421 "params": { 00:18:32.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.421 "host": "nqn.2016-06.io.spdk:host1", 00:18:32.421 "psk": "key0" 00:18:32.421 } 00:18:32.421 }, 00:18:32.421 { 00:18:32.421 "method": "nvmf_subsystem_add_ns", 00:18:32.421 "params": { 00:18:32.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.421 "namespace": { 00:18:32.421 "nsid": 1, 00:18:32.421 "bdev_name": "malloc0", 00:18:32.421 "nguid": "C91C534014CE40B8AF54B3473108E9B2", 00:18:32.421 "uuid": "c91c5340-14ce-40b8-af54-b3473108e9b2", 00:18:32.421 "no_auto_visible": false 00:18:32.421 } 00:18:32.421 } 00:18:32.421 }, 00:18:32.421 { 00:18:32.421 "method": "nvmf_subsystem_add_listener", 00:18:32.421 "params": { 00:18:32.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.421 "listen_address": { 00:18:32.421 "trtype": "TCP", 00:18:32.421 "adrfam": "IPv4", 00:18:32.421 "traddr": "10.0.0.2", 00:18:32.421 "trsvcid": "4420" 00:18:32.421 }, 00:18:32.421 "secure_channel": true 00:18:32.421 } 00:18:32.421 } 00:18:32.421 ] 00:18:32.421 } 00:18:32.421 ] 00:18:32.421 }' 00:18:32.421 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:32.681 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:32.681 "subsystems": [ 00:18:32.681 { 00:18:32.681 "subsystem": "keyring", 00:18:32.681 "config": [ 00:18:32.681 { 00:18:32.681 "method": "keyring_file_add_key", 00:18:32.681 "params": { 00:18:32.681 "name": "key0", 00:18:32.681 "path": "/tmp/tmp.V4UOsF7x0p" 00:18:32.681 } 00:18:32.681 } 00:18:32.681 ] 00:18:32.681 }, 00:18:32.681 { 00:18:32.681 "subsystem": "iobuf", 00:18:32.681 "config": [ 00:18:32.681 { 00:18:32.681 "method": "iobuf_set_options", 00:18:32.681 "params": { 00:18:32.681 "small_pool_count": 8192, 00:18:32.681 "large_pool_count": 1024, 00:18:32.681 "small_bufsize": 8192, 00:18:32.681 "large_bufsize": 135168, 00:18:32.681 "enable_numa": false 00:18:32.681 } 00:18:32.681 } 00:18:32.681 ] 00:18:32.681 }, 00:18:32.681 { 00:18:32.681 "subsystem": "sock", 00:18:32.681 "config": [ 00:18:32.681 { 00:18:32.681 "method": "sock_set_default_impl", 00:18:32.681 "params": { 00:18:32.681 "impl_name": "posix" 00:18:32.681 } 00:18:32.681 }, 00:18:32.681 { 00:18:32.681 "method": "sock_impl_set_options", 00:18:32.681 "params": { 00:18:32.681 "impl_name": "ssl", 00:18:32.681 "recv_buf_size": 4096, 00:18:32.681 "send_buf_size": 4096, 00:18:32.681 "enable_recv_pipe": true, 00:18:32.681 "enable_quickack": false, 00:18:32.681 "enable_placement_id": 0, 00:18:32.681 "enable_zerocopy_send_server": true, 00:18:32.682 "enable_zerocopy_send_client": false, 00:18:32.682 "zerocopy_threshold": 0, 00:18:32.682 "tls_version": 0, 00:18:32.682 "enable_ktls": false 00:18:32.682 } 00:18:32.682 }, 00:18:32.682 { 00:18:32.682 "method": "sock_impl_set_options", 00:18:32.682 "params": { 00:18:32.682 "impl_name": "posix", 00:18:32.682 "recv_buf_size": 2097152, 00:18:32.682 "send_buf_size": 2097152, 00:18:32.682 "enable_recv_pipe": true, 00:18:32.682 "enable_quickack": false, 00:18:32.682 "enable_placement_id": 0, 00:18:32.682 "enable_zerocopy_send_server": true, 00:18:32.682 "enable_zerocopy_send_client": false, 00:18:32.682 "zerocopy_threshold": 0, 00:18:32.682 "tls_version": 0, 00:18:32.682 "enable_ktls": false 00:18:32.682 } 00:18:32.682 
} 00:18:32.682 ] 00:18:32.682 }, 00:18:32.682 { 00:18:32.682 "subsystem": "vmd", 00:18:32.682 "config": [] 00:18:32.682 }, 00:18:32.682 { 00:18:32.682 "subsystem": "accel", 00:18:32.682 "config": [ 00:18:32.682 { 00:18:32.682 "method": "accel_set_options", 00:18:32.682 "params": { 00:18:32.682 "small_cache_size": 128, 00:18:32.682 "large_cache_size": 16, 00:18:32.682 "task_count": 2048, 00:18:32.682 "sequence_count": 2048, 00:18:32.682 "buf_count": 2048 00:18:32.682 } 00:18:32.682 } 00:18:32.682 ] 00:18:32.682 }, 00:18:32.682 { 00:18:32.682 "subsystem": "bdev", 00:18:32.682 "config": [ 00:18:32.682 { 00:18:32.682 "method": "bdev_set_options", 00:18:32.682 "params": { 00:18:32.682 "bdev_io_pool_size": 65535, 00:18:32.682 "bdev_io_cache_size": 256, 00:18:32.682 "bdev_auto_examine": true, 00:18:32.682 "iobuf_small_cache_size": 128, 00:18:32.682 "iobuf_large_cache_size": 16 00:18:32.682 } 00:18:32.682 }, 00:18:32.682 { 00:18:32.682 "method": "bdev_raid_set_options", 00:18:32.682 "params": { 00:18:32.682 "process_window_size_kb": 1024, 00:18:32.682 "process_max_bandwidth_mb_sec": 0 00:18:32.682 } 00:18:32.682 }, 00:18:32.682 { 00:18:32.682 "method": "bdev_iscsi_set_options", 00:18:32.682 "params": { 00:18:32.682 "timeout_sec": 30 00:18:32.682 } 00:18:32.682 }, 00:18:32.682 { 00:18:32.682 "method": "bdev_nvme_set_options", 00:18:32.682 "params": { 00:18:32.682 "action_on_timeout": "none", 00:18:32.682 "timeout_us": 0, 00:18:32.682 "timeout_admin_us": 0, 00:18:32.682 "keep_alive_timeout_ms": 10000, 00:18:32.682 "arbitration_burst": 0, 00:18:32.682 "low_priority_weight": 0, 00:18:32.682 "medium_priority_weight": 0, 00:18:32.682 "high_priority_weight": 0, 00:18:32.682 "nvme_adminq_poll_period_us": 10000, 00:18:32.682 "nvme_ioq_poll_period_us": 0, 00:18:32.682 "io_queue_requests": 512, 00:18:32.682 "delay_cmd_submit": true, 00:18:32.682 "transport_retry_count": 4, 00:18:32.682 "bdev_retry_count": 3, 00:18:32.682 "transport_ack_timeout": 0, 00:18:32.682 "ctrlr_loss_timeout_sec": 0, 00:18:32.682 "reconnect_delay_sec": 0, 00:18:32.682 "fast_io_fail_timeout_sec": 0, 00:18:32.682 "disable_auto_failback": false, 00:18:32.682 "generate_uuids": false, 00:18:32.682 "transport_tos": 0, 00:18:32.682 "nvme_error_stat": false, 00:18:32.682 "rdma_srq_size": 0, 00:18:32.682 "io_path_stat": false, 00:18:32.682 "allow_accel_sequence": false, 00:18:32.682 "rdma_max_cq_size": 0, 00:18:32.682 "rdma_cm_event_timeout_ms": 0, 00:18:32.682 "dhchap_digests": [ 00:18:32.682 "sha256", 00:18:32.682 "sha384", 00:18:32.682 "sha512" 00:18:32.682 ], 00:18:32.682 "dhchap_dhgroups": [ 00:18:32.682 "null", 00:18:32.682 "ffdhe2048", 00:18:32.682 "ffdhe3072", 00:18:32.682 "ffdhe4096", 00:18:32.682 "ffdhe6144", 00:18:32.682 "ffdhe8192" 00:18:32.682 ] 00:18:32.682 } 00:18:32.682 }, 00:18:32.682 { 00:18:32.682 "method": "bdev_nvme_attach_controller", 00:18:32.682 "params": { 00:18:32.682 "name": "TLSTEST", 00:18:32.682 "trtype": "TCP", 00:18:32.682 "adrfam": "IPv4", 00:18:32.682 "traddr": "10.0.0.2", 00:18:32.682 "trsvcid": "4420", 00:18:32.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.682 "prchk_reftag": false, 00:18:32.682 "prchk_guard": false, 00:18:32.682 "ctrlr_loss_timeout_sec": 0, 00:18:32.682 "reconnect_delay_sec": 0, 00:18:32.682 "fast_io_fail_timeout_sec": 0, 00:18:32.682 "psk": "key0", 00:18:32.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.682 "hdgst": false, 00:18:32.682 "ddgst": false, 00:18:32.682 "multipath": "multipath" 00:18:32.682 } 00:18:32.682 }, 00:18:32.682 { 00:18:32.682 "method": 
"bdev_nvme_set_hotplug", 00:18:32.682 "params": { 00:18:32.682 "period_us": 100000, 00:18:32.682 "enable": false 00:18:32.682 } 00:18:32.682 }, 00:18:32.682 { 00:18:32.682 "method": "bdev_wait_for_examine" 00:18:32.682 } 00:18:32.682 ] 00:18:32.682 }, 00:18:32.682 { 00:18:32.682 "subsystem": "nbd", 00:18:32.682 "config": [] 00:18:32.682 } 00:18:32.682 ] 00:18:32.682 }' 00:18:32.682 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1483105 00:18:32.682 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1483105 ']' 00:18:32.682 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1483105 00:18:32.682 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:32.682 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.682 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1483105 00:18:32.682 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:32.682 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:32.682 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1483105' 00:18:32.682 killing process with pid 1483105 00:18:32.682 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1483105 00:18:32.682 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.682 00:18:32.682 Latency(us) 00:18:32.682 [2024-11-17T13:28:21.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.682 [2024-11-17T13:28:21.907Z] =================================================================================================================== 00:18:32.682 [2024-11-17T13:28:21.907Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.682 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1483105 00:18:32.942 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1482716 00:18:32.942 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1482716 ']' 00:18:32.942 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1482716 00:18:32.942 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:32.942 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.942 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1482716 00:18:32.942 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:32.942 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:32.942 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1482716' 00:18:32.942 killing process with pid 1482716 00:18:32.942 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1482716 00:18:32.942 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1482716 00:18:33.202 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:33.202 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:33.202 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.202 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:33.202 "subsystems": [ 00:18:33.202 { 00:18:33.202 "subsystem": "keyring", 00:18:33.202 "config": [ 00:18:33.202 { 00:18:33.202 "method": "keyring_file_add_key", 00:18:33.202 "params": { 00:18:33.202 "name": "key0", 00:18:33.202 "path": "/tmp/tmp.V4UOsF7x0p" 00:18:33.202 } 00:18:33.202 } 00:18:33.202 ] 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "subsystem": "iobuf", 00:18:33.202 "config": [ 00:18:33.202 { 00:18:33.202 "method": "iobuf_set_options", 00:18:33.202 "params": { 00:18:33.202 "small_pool_count": 8192, 00:18:33.202 "large_pool_count": 1024, 00:18:33.202 "small_bufsize": 8192, 00:18:33.202 "large_bufsize": 135168, 00:18:33.202 "enable_numa": false 00:18:33.202 } 00:18:33.202 } 00:18:33.202 ] 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "subsystem": "sock", 00:18:33.202 "config": [ 00:18:33.202 { 00:18:33.202 "method": "sock_set_default_impl", 00:18:33.202 "params": { 00:18:33.202 "impl_name": "posix" 00:18:33.202 } 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "method": "sock_impl_set_options", 00:18:33.202 "params": { 00:18:33.202 "impl_name": "ssl", 00:18:33.202 "recv_buf_size": 4096, 00:18:33.202 "send_buf_size": 4096, 00:18:33.202 "enable_recv_pipe": true, 00:18:33.202 "enable_quickack": false, 00:18:33.202 "enable_placement_id": 0, 00:18:33.202 "enable_zerocopy_send_server": true, 00:18:33.202 "enable_zerocopy_send_client": false, 00:18:33.202 "zerocopy_threshold": 0, 00:18:33.202 "tls_version": 0, 00:18:33.202 "enable_ktls": false 00:18:33.202 } 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "method": "sock_impl_set_options", 00:18:33.202 "params": { 00:18:33.202 "impl_name": "posix", 00:18:33.202 "recv_buf_size": 2097152, 00:18:33.202 "send_buf_size": 2097152, 00:18:33.202 "enable_recv_pipe": true, 00:18:33.202 "enable_quickack": false, 00:18:33.202 "enable_placement_id": 0, 00:18:33.202 "enable_zerocopy_send_server": true, 00:18:33.202 "enable_zerocopy_send_client": false, 00:18:33.202 "zerocopy_threshold": 0, 00:18:33.202 "tls_version": 0, 00:18:33.202 "enable_ktls": false 00:18:33.202 } 00:18:33.202 } 00:18:33.202 ] 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "subsystem": "vmd", 00:18:33.202 "config": [] 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "subsystem": "accel", 00:18:33.202 "config": [ 00:18:33.202 { 00:18:33.202 "method": "accel_set_options", 00:18:33.202 "params": { 00:18:33.202 "small_cache_size": 128, 00:18:33.202 "large_cache_size": 16, 00:18:33.202 "task_count": 2048, 00:18:33.202 "sequence_count": 2048, 00:18:33.202 "buf_count": 2048 00:18:33.202 } 00:18:33.202 } 00:18:33.202 ] 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "subsystem": "bdev", 00:18:33.202 "config": [ 00:18:33.202 { 00:18:33.202 "method": "bdev_set_options", 00:18:33.202 "params": { 00:18:33.202 "bdev_io_pool_size": 65535, 00:18:33.202 "bdev_io_cache_size": 256, 00:18:33.202 "bdev_auto_examine": true, 00:18:33.202 "iobuf_small_cache_size": 128, 00:18:33.202 "iobuf_large_cache_size": 16 00:18:33.202 } 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "method": "bdev_raid_set_options", 00:18:33.202 "params": { 00:18:33.202 "process_window_size_kb": 1024, 00:18:33.202 "process_max_bandwidth_mb_sec": 0 00:18:33.202 } 00:18:33.202 }, 
00:18:33.202 { 00:18:33.202 "method": "bdev_iscsi_set_options", 00:18:33.202 "params": { 00:18:33.202 "timeout_sec": 30 00:18:33.202 } 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "method": "bdev_nvme_set_options", 00:18:33.202 "params": { 00:18:33.202 "action_on_timeout": "none", 00:18:33.202 "timeout_us": 0, 00:18:33.202 "timeout_admin_us": 0, 00:18:33.202 "keep_alive_timeout_ms": 10000, 00:18:33.202 "arbitration_burst": 0, 00:18:33.202 "low_priority_weight": 0, 00:18:33.202 "medium_priority_weight": 0, 00:18:33.202 "high_priority_weight": 0, 00:18:33.202 "nvme_adminq_poll_period_us": 10000, 00:18:33.202 "nvme_ioq_poll_period_us": 0, 00:18:33.202 "io_queue_requests": 0, 00:18:33.202 "delay_cmd_submit": true, 00:18:33.202 "transport_retry_count": 4, 00:18:33.202 "bdev_retry_count": 3, 00:18:33.202 "transport_ack_timeout": 0, 00:18:33.202 "ctrlr_loss_timeout_sec": 0, 00:18:33.202 "reconnect_delay_sec": 0, 00:18:33.202 "fast_io_fail_timeout_sec": 0, 00:18:33.202 "disable_auto_failback": false, 00:18:33.202 "generate_uuids": false, 00:18:33.202 "transport_tos": 0, 00:18:33.202 "nvme_error_stat": false, 00:18:33.202 "rdma_srq_size": 0, 00:18:33.202 "io_path_stat": false, 00:18:33.202 "allow_accel_sequence": false, 00:18:33.202 "rdma_max_cq_size": 0, 00:18:33.202 "rdma_cm_event_timeout_ms": 0, 00:18:33.202 "dhchap_digests": [ 00:18:33.202 "sha256", 00:18:33.202 "sha384", 00:18:33.202 "sha512" 00:18:33.202 ], 00:18:33.202 "dhchap_dhgroups": [ 00:18:33.202 "null", 00:18:33.202 "ffdhe2048", 00:18:33.202 "ffdhe3072", 00:18:33.202 "ffdhe4096", 00:18:33.202 "ffdhe6144", 00:18:33.202 "ffdhe8192" 00:18:33.202 ] 00:18:33.202 } 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "method": "bdev_nvme_set_hotplug", 00:18:33.202 "params": { 00:18:33.202 "period_us": 100000, 00:18:33.202 "enable": false 00:18:33.202 } 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "method": "bdev_malloc_create", 00:18:33.202 "params": { 00:18:33.202 "name": "malloc0", 00:18:33.202 "num_blocks": 8192, 00:18:33.202 "block_size": 4096, 00:18:33.202 "physical_block_size": 4096, 00:18:33.202 "uuid": "c91c5340-14ce-40b8-af54-b3473108e9b2", 00:18:33.202 "optimal_io_boundary": 0, 00:18:33.202 "md_size": 0, 00:18:33.202 "dif_type": 0, 00:18:33.202 "dif_is_head_of_md": false, 00:18:33.202 "dif_pi_format": 0 00:18:33.202 } 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "method": "bdev_wait_for_examine" 00:18:33.202 } 00:18:33.202 ] 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "subsystem": "nbd", 00:18:33.202 "config": [] 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "subsystem": "scheduler", 00:18:33.202 "config": [ 00:18:33.202 { 00:18:33.202 "method": "framework_set_scheduler", 00:18:33.202 "params": { 00:18:33.202 "name": "static" 00:18:33.202 } 00:18:33.202 } 00:18:33.202 ] 00:18:33.202 }, 00:18:33.202 { 00:18:33.202 "subsystem": "nvmf", 00:18:33.202 "config": [ 00:18:33.202 { 00:18:33.202 "method": "nvmf_set_config", 00:18:33.202 "params": { 00:18:33.202 "discovery_filter": "match_any", 00:18:33.202 "admin_cmd_passthru": { 00:18:33.202 "identify_ctrlr": false 00:18:33.202 }, 00:18:33.202 "dhchap_digests": [ 00:18:33.202 "sha256", 00:18:33.202 "sha384", 00:18:33.202 "sha512" 00:18:33.202 ], 00:18:33.202 "dhchap_dhgroups": [ 00:18:33.202 "null", 00:18:33.202 "ffdhe2048", 00:18:33.202 "ffdhe3072", 00:18:33.202 "ffdhe4096", 00:18:33.203 "ffdhe6144", 00:18:33.203 "ffdhe8192" 00:18:33.203 ] 00:18:33.203 } 00:18:33.203 }, 00:18:33.203 { 00:18:33.203 "method": "nvmf_set_max_subsystems", 00:18:33.203 "params": { 00:18:33.203 "max_subsystems": 1024 
00:18:33.203 } 00:18:33.203 }, 00:18:33.203 { 00:18:33.203 "method": "nvmf_set_crdt", 00:18:33.203 "params": { 00:18:33.203 "crdt1": 0, 00:18:33.203 "crdt2": 0, 00:18:33.203 "crdt3": 0 00:18:33.203 } 00:18:33.203 }, 00:18:33.203 { 00:18:33.203 "method": "nvmf_create_transport", 00:18:33.203 "params": { 00:18:33.203 "trtype": "TCP", 00:18:33.203 "max_queue_depth": 128, 00:18:33.203 "max_io_qpairs_per_ctrlr": 127, 00:18:33.203 "in_capsule_data_size": 4096, 00:18:33.203 "max_io_size": 131072, 00:18:33.203 "io_unit_size": 131072, 00:18:33.203 "max_aq_depth": 128, 00:18:33.203 "num_shared_buffers": 511, 00:18:33.203 "buf_cache_size": 4294967295, 00:18:33.203 "dif_insert_or_strip": false, 00:18:33.203 "zcopy": false, 00:18:33.203 "c2h_success": false, 00:18:33.203 "sock_priority": 0, 00:18:33.203 "abort_timeout_sec": 1, 00:18:33.203 "ack_timeout": 0, 00:18:33.203 "data_wr_pool_size": 0 00:18:33.203 } 00:18:33.203 }, 00:18:33.203 { 00:18:33.203 "method": "nvmf_create_subsystem", 00:18:33.203 "params": { 00:18:33.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.203 "allow_any_host": false, 00:18:33.203 "serial_number": "SPDK00000000000001", 00:18:33.203 "model_number": "SPDK bdev Controller", 00:18:33.203 "max_namespaces": 10, 00:18:33.203 "min_cntlid": 1, 00:18:33.203 "max_cntlid": 65519, 00:18:33.203 "ana_reporting": false 00:18:33.203 } 00:18:33.203 }, 00:18:33.203 { 00:18:33.203 "method": "nvmf_subsystem_add_host", 00:18:33.203 "params": { 00:18:33.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.203 "host": "nqn.2016-06.io.spdk:host1", 00:18:33.203 "psk": "key0" 00:18:33.203 } 00:18:33.203 }, 00:18:33.203 { 00:18:33.203 "method": "nvmf_subsystem_add_ns", 00:18:33.203 "params": { 00:18:33.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.203 "namespace": { 00:18:33.203 "nsid": 1, 00:18:33.203 "bdev_name": "malloc0", 00:18:33.203 "nguid": "C91C534014CE40B8AF54B3473108E9B2", 00:18:33.203 "uuid": "c91c5340-14ce-40b8-af54-b3473108e9b2", 00:18:33.203 "no_auto_visible": false 00:18:33.203 } 00:18:33.203 } 00:18:33.203 }, 00:18:33.203 { 00:18:33.203 "method": "nvmf_subsystem_add_listener", 00:18:33.203 "params": { 00:18:33.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.203 "listen_address": { 00:18:33.203 "trtype": "TCP", 00:18:33.203 "adrfam": "IPv4", 00:18:33.203 "traddr": "10.0.0.2", 00:18:33.203 "trsvcid": "4420" 00:18:33.203 }, 00:18:33.203 "secure_channel": true 00:18:33.203 } 00:18:33.203 } 00:18:33.203 ] 00:18:33.203 } 00:18:33.203 ] 00:18:33.203 }' 00:18:33.203 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.203 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1483422 00:18:33.203 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:33.203 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1483422 00:18:33.203 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1483422 ']' 00:18:33.203 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.203 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.203 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:33.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.203 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.203 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.203 [2024-11-17 14:28:22.292461] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:33.203 [2024-11-17 14:28:22.292507] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.203 [2024-11-17 14:28:22.369790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.203 [2024-11-17 14:28:22.410217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.203 [2024-11-17 14:28:22.410252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.203 [2024-11-17 14:28:22.410259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.203 [2024-11-17 14:28:22.410265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.203 [2024-11-17 14:28:22.410270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.203 [2024-11-17 14:28:22.410880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.462 [2024-11-17 14:28:22.623478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.462 [2024-11-17 14:28:22.655499] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.462 [2024-11-17 14:28:22.655702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.031 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.031 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1483484 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1483484 /var/tmp/bdevperf.sock 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1483484 ']' 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.032 14:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:34.032 "subsystems": [ 00:18:34.032 { 00:18:34.032 "subsystem": "keyring", 00:18:34.032 "config": [ 00:18:34.032 { 00:18:34.032 "method": "keyring_file_add_key", 00:18:34.032 "params": { 00:18:34.032 "name": "key0", 00:18:34.032 "path": "/tmp/tmp.V4UOsF7x0p" 00:18:34.032 } 00:18:34.032 } 00:18:34.032 ] 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "subsystem": "iobuf", 00:18:34.032 "config": [ 00:18:34.032 { 00:18:34.032 "method": "iobuf_set_options", 00:18:34.032 "params": { 00:18:34.032 "small_pool_count": 8192, 00:18:34.032 "large_pool_count": 1024, 00:18:34.032 "small_bufsize": 8192, 00:18:34.032 "large_bufsize": 135168, 00:18:34.032 "enable_numa": false 00:18:34.032 } 00:18:34.032 } 00:18:34.032 ] 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "subsystem": "sock", 00:18:34.032 "config": [ 00:18:34.032 { 00:18:34.032 "method": "sock_set_default_impl", 00:18:34.032 "params": { 00:18:34.032 "impl_name": "posix" 00:18:34.032 } 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "method": "sock_impl_set_options", 00:18:34.032 "params": { 00:18:34.032 "impl_name": "ssl", 00:18:34.032 "recv_buf_size": 4096, 00:18:34.032 "send_buf_size": 4096, 00:18:34.032 "enable_recv_pipe": true, 00:18:34.032 "enable_quickack": false, 00:18:34.032 "enable_placement_id": 0, 00:18:34.032 "enable_zerocopy_send_server": true, 00:18:34.032 "enable_zerocopy_send_client": false, 00:18:34.032 "zerocopy_threshold": 0, 00:18:34.032 "tls_version": 0, 00:18:34.032 "enable_ktls": false 00:18:34.032 } 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "method": "sock_impl_set_options", 00:18:34.032 "params": { 00:18:34.032 "impl_name": "posix", 00:18:34.032 "recv_buf_size": 2097152, 00:18:34.032 "send_buf_size": 2097152, 00:18:34.032 "enable_recv_pipe": true, 00:18:34.032 "enable_quickack": false, 00:18:34.032 "enable_placement_id": 0, 00:18:34.032 "enable_zerocopy_send_server": true, 00:18:34.032 "enable_zerocopy_send_client": false, 00:18:34.032 "zerocopy_threshold": 0, 00:18:34.032 "tls_version": 0, 00:18:34.032 "enable_ktls": false 00:18:34.032 } 00:18:34.032 } 00:18:34.032 ] 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "subsystem": "vmd", 00:18:34.032 "config": [] 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "subsystem": "accel", 00:18:34.032 "config": [ 00:18:34.032 { 00:18:34.032 "method": "accel_set_options", 00:18:34.032 "params": { 00:18:34.032 "small_cache_size": 128, 00:18:34.032 "large_cache_size": 16, 00:18:34.032 "task_count": 2048, 00:18:34.032 "sequence_count": 2048, 00:18:34.032 "buf_count": 2048 00:18:34.032 } 00:18:34.032 } 00:18:34.032 ] 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "subsystem": "bdev", 00:18:34.032 "config": [ 00:18:34.032 { 00:18:34.032 "method": "bdev_set_options", 00:18:34.032 "params": { 00:18:34.032 "bdev_io_pool_size": 65535, 00:18:34.032 "bdev_io_cache_size": 256, 00:18:34.032 "bdev_auto_examine": true, 00:18:34.032 "iobuf_small_cache_size": 128, 00:18:34.032 "iobuf_large_cache_size": 16 00:18:34.032 } 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "method": "bdev_raid_set_options", 00:18:34.032 "params": { 00:18:34.032 "process_window_size_kb": 1024, 00:18:34.032 "process_max_bandwidth_mb_sec": 0 00:18:34.032 } 00:18:34.032 }, 
00:18:34.032 { 00:18:34.032 "method": "bdev_iscsi_set_options", 00:18:34.032 "params": { 00:18:34.032 "timeout_sec": 30 00:18:34.032 } 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "method": "bdev_nvme_set_options", 00:18:34.032 "params": { 00:18:34.032 "action_on_timeout": "none", 00:18:34.032 "timeout_us": 0, 00:18:34.032 "timeout_admin_us": 0, 00:18:34.032 "keep_alive_timeout_ms": 10000, 00:18:34.032 "arbitration_burst": 0, 00:18:34.032 "low_priority_weight": 0, 00:18:34.032 "medium_priority_weight": 0, 00:18:34.032 "high_priority_weight": 0, 00:18:34.032 "nvme_adminq_poll_period_us": 10000, 00:18:34.032 "nvme_ioq_poll_period_us": 0, 00:18:34.032 "io_queue_requests": 512, 00:18:34.032 "delay_cmd_submit": true, 00:18:34.032 "transport_retry_count": 4, 00:18:34.032 "bdev_retry_count": 3, 00:18:34.032 "transport_ack_timeout": 0, 00:18:34.032 "ctrlr_loss_timeout_sec": 0, 00:18:34.032 "reconnect_delay_sec": 0, 00:18:34.032 "fast_io_fail_timeout_sec": 0, 00:18:34.032 "disable_auto_failback": false, 00:18:34.032 "generate_uuids": false, 00:18:34.032 "transport_tos": 0, 00:18:34.032 "nvme_error_stat": false, 00:18:34.032 "rdma_srq_size": 0, 00:18:34.032 "io_path_stat": false, 00:18:34.032 "allow_accel_sequence": false, 00:18:34.032 "rdma_max_cq_size": 0, 00:18:34.032 "rdma_cm_event_timeout_ms": 0, 00:18:34.032 "dhchap_digests": [ 00:18:34.032 "sha256", 00:18:34.032 "sha384", 00:18:34.032 "sha512" 00:18:34.032 ], 00:18:34.032 "dhchap_dhgroups": [ 00:18:34.032 "null", 00:18:34.032 "ffdhe2048", 00:18:34.032 "ffdhe3072", 00:18:34.032 "ffdhe4096", 00:18:34.032 "ffdhe6144", 00:18:34.032 "ffdhe8192" 00:18:34.032 ] 00:18:34.032 } 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "method": "bdev_nvme_attach_controller", 00:18:34.032 "params": { 00:18:34.032 "name": "TLSTEST", 00:18:34.032 "trtype": "TCP", 00:18:34.032 "adrfam": "IPv4", 00:18:34.032 "traddr": "10.0.0.2", 00:18:34.032 "trsvcid": "4420", 00:18:34.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.032 "prchk_reftag": false, 00:18:34.032 "prchk_guard": false, 00:18:34.032 "ctrlr_loss_timeout_sec": 0, 00:18:34.032 "reconnect_delay_sec": 0, 00:18:34.032 "fast_io_fail_timeout_sec": 0, 00:18:34.032 "psk": "key0", 00:18:34.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.032 "hdgst": false, 00:18:34.032 "ddgst": false, 00:18:34.032 "multipath": "multipath" 00:18:34.032 } 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "method": "bdev_nvme_set_hotplug", 00:18:34.032 "params": { 00:18:34.032 "period_us": 100000, 00:18:34.032 "enable": false 00:18:34.032 } 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "method": "bdev_wait_for_examine" 00:18:34.032 } 00:18:34.032 ] 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "subsystem": "nbd", 00:18:34.032 "config": [] 00:18:34.032 } 00:18:34.032 ] 00:18:34.032 }' 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.032 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.032 [2024-11-17 14:28:23.211124] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:18:34.033 [2024-11-17 14:28:23.211171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483484 ] 00:18:34.292 [2024-11-17 14:28:23.288060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.292 [2024-11-17 14:28:23.329780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.292 [2024-11-17 14:28:23.481000] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.860 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.860 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.860 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:35.119 Running I/O for 10 seconds... 00:18:37.003 5263.00 IOPS, 20.56 MiB/s [2024-11-17T13:28:27.255Z] 5367.00 IOPS, 20.96 MiB/s [2024-11-17T13:28:28.192Z] 5242.67 IOPS, 20.48 MiB/s [2024-11-17T13:28:29.570Z] 5192.00 IOPS, 20.28 MiB/s [2024-11-17T13:28:30.506Z] 5165.20 IOPS, 20.18 MiB/s [2024-11-17T13:28:31.442Z] 5144.00 IOPS, 20.09 MiB/s [2024-11-17T13:28:32.378Z] 5092.29 IOPS, 19.89 MiB/s [2024-11-17T13:28:33.314Z] 5078.75 IOPS, 19.84 MiB/s [2024-11-17T13:28:34.250Z] 5069.89 IOPS, 19.80 MiB/s [2024-11-17T13:28:34.250Z] 5072.80 IOPS, 19.82 MiB/s 00:18:45.025 Latency(us) 00:18:45.025 [2024-11-17T13:28:34.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.025 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:45.025 Verification LBA range: start 0x0 length 0x2000 00:18:45.025 TLSTESTn1 : 10.02 5077.07 19.83 0.00 0.00 25174.33 6439.62 50377.24 00:18:45.025 [2024-11-17T13:28:34.250Z] =================================================================================================================== 00:18:45.025 [2024-11-17T13:28:34.250Z] Total : 5077.07 19.83 0.00 0.00 25174.33 6439.62 50377.24 00:18:45.025 { 00:18:45.025 "results": [ 00:18:45.025 { 00:18:45.025 "job": "TLSTESTn1", 00:18:45.025 "core_mask": "0x4", 00:18:45.025 "workload": "verify", 00:18:45.025 "status": "finished", 00:18:45.025 "verify_range": { 00:18:45.025 "start": 0, 00:18:45.025 "length": 8192 00:18:45.025 }, 00:18:45.025 "queue_depth": 128, 00:18:45.025 "io_size": 4096, 00:18:45.025 "runtime": 10.016606, 00:18:45.025 "iops": 5077.069019186739, 00:18:45.025 "mibps": 19.8323008561982, 00:18:45.025 "io_failed": 0, 00:18:45.025 "io_timeout": 0, 00:18:45.025 "avg_latency_us": 25174.33463400204, 00:18:45.025 "min_latency_us": 6439.624347826087, 00:18:45.025 "max_latency_us": 50377.23826086956 00:18:45.025 } 00:18:45.025 ], 00:18:45.025 "core_count": 1 00:18:45.025 } 00:18:45.025 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:45.025 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1483484 00:18:45.025 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1483484 ']' 00:18:45.025 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1483484 00:18:45.025 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:45.025 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.025 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1483484 00:18:45.283 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:45.283 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:45.283 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1483484' 00:18:45.283 killing process with pid 1483484 00:18:45.283 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1483484 00:18:45.283 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.283 00:18:45.284 Latency(us) 00:18:45.284 [2024-11-17T13:28:34.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.284 [2024-11-17T13:28:34.509Z] =================================================================================================================== 00:18:45.284 [2024-11-17T13:28:34.509Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1483484 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1483422 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1483422 ']' 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1483422 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1483422 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1483422' 00:18:45.284 killing process with pid 1483422 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1483422 00:18:45.284 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1483422 00:18:45.542 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1485367 00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1485367 
00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1485367 ']' 00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.543 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.543 [2024-11-17 14:28:34.691961] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:45.543 [2024-11-17 14:28:34.692008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.801 [2024-11-17 14:28:34.771890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.801 [2024-11-17 14:28:34.812561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.801 [2024-11-17 14:28:34.812597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.801 [2024-11-17 14:28:34.812604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.801 [2024-11-17 14:28:34.812611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.801 [2024-11-17 14:28:34.812616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:45.801 [2024-11-17 14:28:34.813164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.801 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.801 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.801 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.801 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.801 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.801 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.801 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.V4UOsF7x0p 00:18:45.801 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.V4UOsF7x0p 00:18:45.801 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:46.059 [2024-11-17 14:28:35.113410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.059 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:46.318 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:46.318 [2024-11-17 14:28:35.506422] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:46.318 [2024-11-17 14:28:35.506610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.318 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:46.577 malloc0 00:18:46.577 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:46.836 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.V4UOsF7x0p 00:18:47.095 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1485776 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1485776 /var/tmp/bdevperf.sock 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1485776 ']' 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.354 [2024-11-17 14:28:36.348650] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:47.354 [2024-11-17 14:28:36.348700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485776 ] 00:18:47.354 [2024-11-17 14:28:36.405405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.354 [2024-11-17 14:28:36.446201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.354 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.V4UOsF7x0p 00:18:47.614 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:47.872 [2024-11-17 14:28:36.905426] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.872 nvme0n1 00:18:47.872 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:48.131 Running I/O for 1 seconds... 
00:18:49.067 5443.00 IOPS, 21.26 MiB/s 00:18:49.067 Latency(us) 00:18:49.067 [2024-11-17T13:28:38.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.067 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:49.067 Verification LBA range: start 0x0 length 0x2000 00:18:49.067 nvme0n1 : 1.02 5476.18 21.39 0.00 0.00 23194.52 5071.92 19831.76 00:18:49.067 [2024-11-17T13:28:38.292Z] =================================================================================================================== 00:18:49.067 [2024-11-17T13:28:38.292Z] Total : 5476.18 21.39 0.00 0.00 23194.52 5071.92 19831.76 00:18:49.067 { 00:18:49.067 "results": [ 00:18:49.067 { 00:18:49.067 "job": "nvme0n1", 00:18:49.067 "core_mask": "0x2", 00:18:49.067 "workload": "verify", 00:18:49.067 "status": "finished", 00:18:49.067 "verify_range": { 00:18:49.067 "start": 0, 00:18:49.067 "length": 8192 00:18:49.067 }, 00:18:49.067 "queue_depth": 128, 00:18:49.067 "io_size": 4096, 00:18:49.067 "runtime": 1.017315, 00:18:49.067 "iops": 5476.179944265051, 00:18:49.067 "mibps": 21.391327907285355, 00:18:49.067 "io_failed": 0, 00:18:49.067 "io_timeout": 0, 00:18:49.067 "avg_latency_us": 23194.520883144858, 00:18:49.067 "min_latency_us": 5071.91652173913, 00:18:49.067 "max_latency_us": 19831.76347826087 00:18:49.067 } 00:18:49.067 ], 00:18:49.067 "core_count": 1 00:18:49.067 } 00:18:49.067 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1485776 00:18:49.068 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1485776 ']' 00:18:49.068 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1485776 00:18:49.068 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.068 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.068 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1485776 00:18:49.068 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.068 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.068 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1485776' 00:18:49.068 killing process with pid 1485776 00:18:49.068 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1485776 00:18:49.068 Received shutdown signal, test time was about 1.000000 seconds 00:18:49.068 00:18:49.068 Latency(us) 00:18:49.068 [2024-11-17T13:28:38.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.068 [2024-11-17T13:28:38.293Z] =================================================================================================================== 00:18:49.068 [2024-11-17T13:28:38.293Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.068 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1485776 00:18:49.327 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1485367 00:18:49.327 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1485367 ']' 00:18:49.327 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1485367 00:18:49.327 14:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.327 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.327 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1485367 00:18:49.327 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.327 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.327 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1485367' 00:18:49.327 killing process with pid 1485367 00:18:49.327 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1485367 00:18:49.327 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1485367 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1486029 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1486029 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1486029 ']' 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.586 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.586 [2024-11-17 14:28:38.625801] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:49.586 [2024-11-17 14:28:38.625847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.586 [2024-11-17 14:28:38.704764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.586 [2024-11-17 14:28:38.740124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.586 [2024-11-17 14:28:38.740158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:49.586 [2024-11-17 14:28:38.740166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.586 [2024-11-17 14:28:38.740172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.586 [2024-11-17 14:28:38.740178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.586 [2024-11-17 14:28:38.740754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.845 [2024-11-17 14:28:38.887924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.845 malloc0 00:18:49.845 [2024-11-17 14:28:38.916202] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.845 [2024-11-17 14:28:38.916412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1486130 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1486130 /var/tmp/bdevperf.sock 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1486130 ']' 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.845 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.845 [2024-11-17 14:28:38.991065] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:18:49.845 [2024-11-17 14:28:38.991110] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486130 ] 00:18:49.845 [2024-11-17 14:28:39.065398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.105 [2024-11-17 14:28:39.107693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.105 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.105 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.105 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.V4UOsF7x0p 00:18:50.364 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:50.364 [2024-11-17 14:28:39.539983] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.623 nvme0n1 00:18:50.623 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.623 Running I/O for 1 seconds... 00:18:51.561 5304.00 IOPS, 20.72 MiB/s 00:18:51.561 Latency(us) 00:18:51.561 [2024-11-17T13:28:40.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.561 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:51.561 Verification LBA range: start 0x0 length 0x2000 00:18:51.561 nvme0n1 : 1.02 5352.21 20.91 0.00 0.00 23741.74 4872.46 24618.74 00:18:51.561 [2024-11-17T13:28:40.786Z] =================================================================================================================== 00:18:51.561 [2024-11-17T13:28:40.786Z] Total : 5352.21 20.91 0.00 0.00 23741.74 4872.46 24618.74 00:18:51.561 { 00:18:51.561 "results": [ 00:18:51.561 { 00:18:51.561 "job": "nvme0n1", 00:18:51.561 "core_mask": "0x2", 00:18:51.561 "workload": "verify", 00:18:51.561 "status": "finished", 00:18:51.561 "verify_range": { 00:18:51.561 "start": 0, 00:18:51.561 "length": 8192 00:18:51.561 }, 00:18:51.561 "queue_depth": 128, 00:18:51.561 "io_size": 4096, 00:18:51.561 "runtime": 1.015095, 00:18:51.561 "iops": 5352.208413990808, 00:18:51.561 "mibps": 20.907064117151595, 00:18:51.561 "io_failed": 0, 00:18:51.561 "io_timeout": 0, 00:18:51.561 "avg_latency_us": 23741.741441592843, 00:18:51.561 "min_latency_us": 4872.459130434782, 00:18:51.561 "max_latency_us": 24618.740869565216 00:18:51.561 } 00:18:51.561 ], 00:18:51.561 "core_count": 1 00:18:51.561 } 00:18:51.561 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:51.561 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.561 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.820 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.820 14:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:51.820 "subsystems": [ 00:18:51.820 { 00:18:51.820 "subsystem": "keyring", 00:18:51.820 "config": [ 00:18:51.820 { 00:18:51.820 "method": "keyring_file_add_key", 00:18:51.820 "params": { 00:18:51.820 "name": "key0", 00:18:51.820 "path": "/tmp/tmp.V4UOsF7x0p" 00:18:51.820 } 00:18:51.820 } 00:18:51.820 ] 00:18:51.820 }, 00:18:51.820 { 00:18:51.820 "subsystem": "iobuf", 00:18:51.820 "config": [ 00:18:51.820 { 00:18:51.820 "method": "iobuf_set_options", 00:18:51.820 "params": { 00:18:51.820 "small_pool_count": 8192, 00:18:51.820 "large_pool_count": 1024, 00:18:51.820 "small_bufsize": 8192, 00:18:51.820 "large_bufsize": 135168, 00:18:51.820 "enable_numa": false 00:18:51.820 } 00:18:51.820 } 00:18:51.820 ] 00:18:51.820 }, 00:18:51.820 { 00:18:51.820 "subsystem": "sock", 00:18:51.820 "config": [ 00:18:51.820 { 00:18:51.820 "method": "sock_set_default_impl", 00:18:51.820 "params": { 00:18:51.820 "impl_name": "posix" 00:18:51.820 } 00:18:51.820 }, 00:18:51.820 { 00:18:51.820 "method": "sock_impl_set_options", 00:18:51.820 "params": { 00:18:51.820 "impl_name": "ssl", 00:18:51.821 "recv_buf_size": 4096, 00:18:51.821 "send_buf_size": 4096, 00:18:51.821 "enable_recv_pipe": true, 00:18:51.821 "enable_quickack": false, 00:18:51.821 "enable_placement_id": 0, 00:18:51.821 "enable_zerocopy_send_server": true, 00:18:51.821 "enable_zerocopy_send_client": false, 00:18:51.821 "zerocopy_threshold": 0, 00:18:51.821 "tls_version": 0, 00:18:51.821 "enable_ktls": false 00:18:51.821 } 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "method": "sock_impl_set_options", 00:18:51.821 "params": { 00:18:51.821 "impl_name": "posix", 00:18:51.821 "recv_buf_size": 2097152, 00:18:51.821 "send_buf_size": 2097152, 00:18:51.821 "enable_recv_pipe": true, 00:18:51.821 "enable_quickack": false, 00:18:51.821 "enable_placement_id": 0, 00:18:51.821 "enable_zerocopy_send_server": true, 00:18:51.821 "enable_zerocopy_send_client": false, 00:18:51.821 "zerocopy_threshold": 0, 00:18:51.821 "tls_version": 0, 00:18:51.821 "enable_ktls": false 00:18:51.821 } 00:18:51.821 } 00:18:51.821 ] 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "subsystem": "vmd", 00:18:51.821 "config": [] 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "subsystem": "accel", 00:18:51.821 "config": [ 00:18:51.821 { 00:18:51.821 "method": "accel_set_options", 00:18:51.821 "params": { 00:18:51.821 "small_cache_size": 128, 00:18:51.821 "large_cache_size": 16, 00:18:51.821 "task_count": 2048, 00:18:51.821 "sequence_count": 2048, 00:18:51.821 "buf_count": 2048 00:18:51.821 } 00:18:51.821 } 00:18:51.821 ] 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "subsystem": "bdev", 00:18:51.821 "config": [ 00:18:51.821 { 00:18:51.821 "method": "bdev_set_options", 00:18:51.821 "params": { 00:18:51.821 "bdev_io_pool_size": 65535, 00:18:51.821 "bdev_io_cache_size": 256, 00:18:51.821 "bdev_auto_examine": true, 00:18:51.821 "iobuf_small_cache_size": 128, 00:18:51.821 "iobuf_large_cache_size": 16 00:18:51.821 } 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "method": "bdev_raid_set_options", 00:18:51.821 "params": { 00:18:51.821 "process_window_size_kb": 1024, 00:18:51.821 "process_max_bandwidth_mb_sec": 0 00:18:51.821 } 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "method": "bdev_iscsi_set_options", 00:18:51.821 "params": { 00:18:51.821 "timeout_sec": 30 00:18:51.821 } 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "method": "bdev_nvme_set_options", 00:18:51.821 "params": { 00:18:51.821 "action_on_timeout": "none", 00:18:51.821 
"timeout_us": 0, 00:18:51.821 "timeout_admin_us": 0, 00:18:51.821 "keep_alive_timeout_ms": 10000, 00:18:51.821 "arbitration_burst": 0, 00:18:51.821 "low_priority_weight": 0, 00:18:51.821 "medium_priority_weight": 0, 00:18:51.821 "high_priority_weight": 0, 00:18:51.821 "nvme_adminq_poll_period_us": 10000, 00:18:51.821 "nvme_ioq_poll_period_us": 0, 00:18:51.821 "io_queue_requests": 0, 00:18:51.821 "delay_cmd_submit": true, 00:18:51.821 "transport_retry_count": 4, 00:18:51.821 "bdev_retry_count": 3, 00:18:51.821 "transport_ack_timeout": 0, 00:18:51.821 "ctrlr_loss_timeout_sec": 0, 00:18:51.821 "reconnect_delay_sec": 0, 00:18:51.821 "fast_io_fail_timeout_sec": 0, 00:18:51.821 "disable_auto_failback": false, 00:18:51.821 "generate_uuids": false, 00:18:51.821 "transport_tos": 0, 00:18:51.821 "nvme_error_stat": false, 00:18:51.821 "rdma_srq_size": 0, 00:18:51.821 "io_path_stat": false, 00:18:51.821 "allow_accel_sequence": false, 00:18:51.821 "rdma_max_cq_size": 0, 00:18:51.821 "rdma_cm_event_timeout_ms": 0, 00:18:51.821 "dhchap_digests": [ 00:18:51.821 "sha256", 00:18:51.821 "sha384", 00:18:51.821 "sha512" 00:18:51.821 ], 00:18:51.821 "dhchap_dhgroups": [ 00:18:51.821 "null", 00:18:51.821 "ffdhe2048", 00:18:51.821 "ffdhe3072", 00:18:51.821 "ffdhe4096", 00:18:51.821 "ffdhe6144", 00:18:51.821 "ffdhe8192" 00:18:51.821 ] 00:18:51.821 } 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "method": "bdev_nvme_set_hotplug", 00:18:51.821 "params": { 00:18:51.821 "period_us": 100000, 00:18:51.821 "enable": false 00:18:51.821 } 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "method": "bdev_malloc_create", 00:18:51.821 "params": { 00:18:51.821 "name": "malloc0", 00:18:51.821 "num_blocks": 8192, 00:18:51.821 "block_size": 4096, 00:18:51.821 "physical_block_size": 4096, 00:18:51.821 "uuid": "54514d8b-0ed7-4878-b505-4992722c4341", 00:18:51.821 "optimal_io_boundary": 0, 00:18:51.821 "md_size": 0, 00:18:51.821 "dif_type": 0, 00:18:51.821 "dif_is_head_of_md": false, 00:18:51.821 "dif_pi_format": 0 00:18:51.821 } 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "method": "bdev_wait_for_examine" 00:18:51.821 } 00:18:51.821 ] 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "subsystem": "nbd", 00:18:51.821 "config": [] 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "subsystem": "scheduler", 00:18:51.821 "config": [ 00:18:51.821 { 00:18:51.821 "method": "framework_set_scheduler", 00:18:51.821 "params": { 00:18:51.821 "name": "static" 00:18:51.821 } 00:18:51.821 } 00:18:51.821 ] 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "subsystem": "nvmf", 00:18:51.821 "config": [ 00:18:51.821 { 00:18:51.821 "method": "nvmf_set_config", 00:18:51.821 "params": { 00:18:51.821 "discovery_filter": "match_any", 00:18:51.821 "admin_cmd_passthru": { 00:18:51.821 "identify_ctrlr": false 00:18:51.821 }, 00:18:51.821 "dhchap_digests": [ 00:18:51.821 "sha256", 00:18:51.821 "sha384", 00:18:51.821 "sha512" 00:18:51.821 ], 00:18:51.821 "dhchap_dhgroups": [ 00:18:51.821 "null", 00:18:51.821 "ffdhe2048", 00:18:51.821 "ffdhe3072", 00:18:51.821 "ffdhe4096", 00:18:51.821 "ffdhe6144", 00:18:51.821 "ffdhe8192" 00:18:51.821 ] 00:18:51.821 } 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "method": "nvmf_set_max_subsystems", 00:18:51.821 "params": { 00:18:51.821 "max_subsystems": 1024 00:18:51.821 } 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "method": "nvmf_set_crdt", 00:18:51.821 "params": { 00:18:51.821 "crdt1": 0, 00:18:51.821 "crdt2": 0, 00:18:51.821 "crdt3": 0 00:18:51.821 } 00:18:51.821 }, 00:18:51.821 { 00:18:51.821 "method": "nvmf_create_transport", 00:18:51.821 "params": 
{ 00:18:51.821 "trtype": "TCP", 00:18:51.821 "max_queue_depth": 128, 00:18:51.821 "max_io_qpairs_per_ctrlr": 127, 00:18:51.821 "in_capsule_data_size": 4096, 00:18:51.821 "max_io_size": 131072, 00:18:51.821 "io_unit_size": 131072, 00:18:51.821 "max_aq_depth": 128, 00:18:51.821 "num_shared_buffers": 511, 00:18:51.821 "buf_cache_size": 4294967295, 00:18:51.821 "dif_insert_or_strip": false, 00:18:51.821 "zcopy": false, 00:18:51.821 "c2h_success": false, 00:18:51.821 "sock_priority": 0, 00:18:51.821 "abort_timeout_sec": 1, 00:18:51.821 "ack_timeout": 0, 00:18:51.821 "data_wr_pool_size": 0 00:18:51.821 } 00:18:51.822 }, 00:18:51.822 { 00:18:51.822 "method": "nvmf_create_subsystem", 00:18:51.822 "params": { 00:18:51.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.822 "allow_any_host": false, 00:18:51.822 "serial_number": "00000000000000000000", 00:18:51.822 "model_number": "SPDK bdev Controller", 00:18:51.822 "max_namespaces": 32, 00:18:51.822 "min_cntlid": 1, 00:18:51.822 "max_cntlid": 65519, 00:18:51.822 "ana_reporting": false 00:18:51.822 } 00:18:51.822 }, 00:18:51.822 { 00:18:51.822 "method": "nvmf_subsystem_add_host", 00:18:51.822 "params": { 00:18:51.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.822 "host": "nqn.2016-06.io.spdk:host1", 00:18:51.822 "psk": "key0" 00:18:51.822 } 00:18:51.822 }, 00:18:51.822 { 00:18:51.822 "method": "nvmf_subsystem_add_ns", 00:18:51.822 "params": { 00:18:51.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.822 "namespace": { 00:18:51.822 "nsid": 1, 00:18:51.822 "bdev_name": "malloc0", 00:18:51.822 "nguid": "54514D8B0ED74878B5054992722C4341", 00:18:51.822 "uuid": "54514d8b-0ed7-4878-b505-4992722c4341", 00:18:51.822 "no_auto_visible": false 00:18:51.822 } 00:18:51.822 } 00:18:51.822 }, 00:18:51.822 { 00:18:51.822 "method": "nvmf_subsystem_add_listener", 00:18:51.822 "params": { 00:18:51.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.822 "listen_address": { 00:18:51.822 "trtype": "TCP", 00:18:51.822 "adrfam": "IPv4", 00:18:51.822 "traddr": "10.0.0.2", 00:18:51.822 "trsvcid": "4420" 00:18:51.822 }, 00:18:51.822 "secure_channel": false, 00:18:51.822 "sock_impl": "ssl" 00:18:51.822 } 00:18:51.822 } 00:18:51.822 ] 00:18:51.822 } 00:18:51.822 ] 00:18:51.822 }' 00:18:51.822 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:52.081 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:52.081 "subsystems": [ 00:18:52.081 { 00:18:52.081 "subsystem": "keyring", 00:18:52.081 "config": [ 00:18:52.081 { 00:18:52.081 "method": "keyring_file_add_key", 00:18:52.081 "params": { 00:18:52.081 "name": "key0", 00:18:52.081 "path": "/tmp/tmp.V4UOsF7x0p" 00:18:52.081 } 00:18:52.081 } 00:18:52.081 ] 00:18:52.081 }, 00:18:52.081 { 00:18:52.081 "subsystem": "iobuf", 00:18:52.081 "config": [ 00:18:52.081 { 00:18:52.081 "method": "iobuf_set_options", 00:18:52.081 "params": { 00:18:52.081 "small_pool_count": 8192, 00:18:52.081 "large_pool_count": 1024, 00:18:52.081 "small_bufsize": 8192, 00:18:52.081 "large_bufsize": 135168, 00:18:52.081 "enable_numa": false 00:18:52.081 } 00:18:52.081 } 00:18:52.081 ] 00:18:52.081 }, 00:18:52.081 { 00:18:52.081 "subsystem": "sock", 00:18:52.081 "config": [ 00:18:52.081 { 00:18:52.081 "method": "sock_set_default_impl", 00:18:52.081 "params": { 00:18:52.081 "impl_name": "posix" 00:18:52.081 } 00:18:52.081 }, 00:18:52.081 { 00:18:52.082 "method": "sock_impl_set_options", 00:18:52.082 
"params": { 00:18:52.082 "impl_name": "ssl", 00:18:52.082 "recv_buf_size": 4096, 00:18:52.082 "send_buf_size": 4096, 00:18:52.082 "enable_recv_pipe": true, 00:18:52.082 "enable_quickack": false, 00:18:52.082 "enable_placement_id": 0, 00:18:52.082 "enable_zerocopy_send_server": true, 00:18:52.082 "enable_zerocopy_send_client": false, 00:18:52.082 "zerocopy_threshold": 0, 00:18:52.082 "tls_version": 0, 00:18:52.082 "enable_ktls": false 00:18:52.082 } 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "method": "sock_impl_set_options", 00:18:52.082 "params": { 00:18:52.082 "impl_name": "posix", 00:18:52.082 "recv_buf_size": 2097152, 00:18:52.082 "send_buf_size": 2097152, 00:18:52.082 "enable_recv_pipe": true, 00:18:52.082 "enable_quickack": false, 00:18:52.082 "enable_placement_id": 0, 00:18:52.082 "enable_zerocopy_send_server": true, 00:18:52.082 "enable_zerocopy_send_client": false, 00:18:52.082 "zerocopy_threshold": 0, 00:18:52.082 "tls_version": 0, 00:18:52.082 "enable_ktls": false 00:18:52.082 } 00:18:52.082 } 00:18:52.082 ] 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "subsystem": "vmd", 00:18:52.082 "config": [] 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "subsystem": "accel", 00:18:52.082 "config": [ 00:18:52.082 { 00:18:52.082 "method": "accel_set_options", 00:18:52.082 "params": { 00:18:52.082 "small_cache_size": 128, 00:18:52.082 "large_cache_size": 16, 00:18:52.082 "task_count": 2048, 00:18:52.082 "sequence_count": 2048, 00:18:52.082 "buf_count": 2048 00:18:52.082 } 00:18:52.082 } 00:18:52.082 ] 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "subsystem": "bdev", 00:18:52.082 "config": [ 00:18:52.082 { 00:18:52.082 "method": "bdev_set_options", 00:18:52.082 "params": { 00:18:52.082 "bdev_io_pool_size": 65535, 00:18:52.082 "bdev_io_cache_size": 256, 00:18:52.082 "bdev_auto_examine": true, 00:18:52.082 "iobuf_small_cache_size": 128, 00:18:52.082 "iobuf_large_cache_size": 16 00:18:52.082 } 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "method": "bdev_raid_set_options", 00:18:52.082 "params": { 00:18:52.082 "process_window_size_kb": 1024, 00:18:52.082 "process_max_bandwidth_mb_sec": 0 00:18:52.082 } 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "method": "bdev_iscsi_set_options", 00:18:52.082 "params": { 00:18:52.082 "timeout_sec": 30 00:18:52.082 } 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "method": "bdev_nvme_set_options", 00:18:52.082 "params": { 00:18:52.082 "action_on_timeout": "none", 00:18:52.082 "timeout_us": 0, 00:18:52.082 "timeout_admin_us": 0, 00:18:52.082 "keep_alive_timeout_ms": 10000, 00:18:52.082 "arbitration_burst": 0, 00:18:52.082 "low_priority_weight": 0, 00:18:52.082 "medium_priority_weight": 0, 00:18:52.082 "high_priority_weight": 0, 00:18:52.082 "nvme_adminq_poll_period_us": 10000, 00:18:52.082 "nvme_ioq_poll_period_us": 0, 00:18:52.082 "io_queue_requests": 512, 00:18:52.082 "delay_cmd_submit": true, 00:18:52.082 "transport_retry_count": 4, 00:18:52.082 "bdev_retry_count": 3, 00:18:52.082 "transport_ack_timeout": 0, 00:18:52.082 "ctrlr_loss_timeout_sec": 0, 00:18:52.082 "reconnect_delay_sec": 0, 00:18:52.082 "fast_io_fail_timeout_sec": 0, 00:18:52.082 "disable_auto_failback": false, 00:18:52.082 "generate_uuids": false, 00:18:52.082 "transport_tos": 0, 00:18:52.082 "nvme_error_stat": false, 00:18:52.082 "rdma_srq_size": 0, 00:18:52.082 "io_path_stat": false, 00:18:52.082 "allow_accel_sequence": false, 00:18:52.082 "rdma_max_cq_size": 0, 00:18:52.082 "rdma_cm_event_timeout_ms": 0, 00:18:52.082 "dhchap_digests": [ 00:18:52.082 "sha256", 00:18:52.082 "sha384", 00:18:52.082 
"sha512" 00:18:52.082 ], 00:18:52.082 "dhchap_dhgroups": [ 00:18:52.082 "null", 00:18:52.082 "ffdhe2048", 00:18:52.082 "ffdhe3072", 00:18:52.082 "ffdhe4096", 00:18:52.082 "ffdhe6144", 00:18:52.082 "ffdhe8192" 00:18:52.082 ] 00:18:52.082 } 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "method": "bdev_nvme_attach_controller", 00:18:52.082 "params": { 00:18:52.082 "name": "nvme0", 00:18:52.082 "trtype": "TCP", 00:18:52.082 "adrfam": "IPv4", 00:18:52.082 "traddr": "10.0.0.2", 00:18:52.082 "trsvcid": "4420", 00:18:52.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.082 "prchk_reftag": false, 00:18:52.082 "prchk_guard": false, 00:18:52.082 "ctrlr_loss_timeout_sec": 0, 00:18:52.082 "reconnect_delay_sec": 0, 00:18:52.082 "fast_io_fail_timeout_sec": 0, 00:18:52.082 "psk": "key0", 00:18:52.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.082 "hdgst": false, 00:18:52.082 "ddgst": false, 00:18:52.082 "multipath": "multipath" 00:18:52.082 } 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "method": "bdev_nvme_set_hotplug", 00:18:52.082 "params": { 00:18:52.082 "period_us": 100000, 00:18:52.082 "enable": false 00:18:52.082 } 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "method": "bdev_enable_histogram", 00:18:52.082 "params": { 00:18:52.082 "name": "nvme0n1", 00:18:52.082 "enable": true 00:18:52.082 } 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "method": "bdev_wait_for_examine" 00:18:52.082 } 00:18:52.082 ] 00:18:52.082 }, 00:18:52.082 { 00:18:52.082 "subsystem": "nbd", 00:18:52.082 "config": [] 00:18:52.082 } 00:18:52.082 ] 00:18:52.082 }' 00:18:52.082 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1486130 00:18:52.082 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1486130 ']' 00:18:52.082 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1486130 00:18:52.082 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.082 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.082 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486130 00:18:52.082 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.082 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.082 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486130' 00:18:52.082 killing process with pid 1486130 00:18:52.082 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1486130 00:18:52.082 Received shutdown signal, test time was about 1.000000 seconds 00:18:52.082 00:18:52.082 Latency(us) 00:18:52.082 [2024-11-17T13:28:41.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.082 [2024-11-17T13:28:41.307Z] =================================================================================================================== 00:18:52.082 [2024-11-17T13:28:41.307Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.082 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1486130 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1486029 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1486029 
']' 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1486029 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486029 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486029' 00:18:52.342 killing process with pid 1486029 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1486029 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1486029 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.342 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:52.342 "subsystems": [ 00:18:52.342 { 00:18:52.342 "subsystem": "keyring", 00:18:52.342 "config": [ 00:18:52.342 { 00:18:52.342 "method": "keyring_file_add_key", 00:18:52.342 "params": { 00:18:52.342 "name": "key0", 00:18:52.342 "path": "/tmp/tmp.V4UOsF7x0p" 00:18:52.342 } 00:18:52.342 } 00:18:52.342 ] 00:18:52.342 }, 00:18:52.342 { 00:18:52.342 "subsystem": "iobuf", 00:18:52.342 "config": [ 00:18:52.342 { 00:18:52.342 "method": "iobuf_set_options", 00:18:52.342 "params": { 00:18:52.342 "small_pool_count": 8192, 00:18:52.342 "large_pool_count": 1024, 00:18:52.342 "small_bufsize": 8192, 00:18:52.342 "large_bufsize": 135168, 00:18:52.343 "enable_numa": false 00:18:52.343 } 00:18:52.343 } 00:18:52.343 ] 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "subsystem": "sock", 00:18:52.343 "config": [ 00:18:52.343 { 00:18:52.343 "method": "sock_set_default_impl", 00:18:52.343 "params": { 00:18:52.343 "impl_name": "posix" 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "sock_impl_set_options", 00:18:52.343 "params": { 00:18:52.343 "impl_name": "ssl", 00:18:52.343 "recv_buf_size": 4096, 00:18:52.343 "send_buf_size": 4096, 00:18:52.343 "enable_recv_pipe": true, 00:18:52.343 "enable_quickack": false, 00:18:52.343 "enable_placement_id": 0, 00:18:52.343 "enable_zerocopy_send_server": true, 00:18:52.343 "enable_zerocopy_send_client": false, 00:18:52.343 "zerocopy_threshold": 0, 00:18:52.343 "tls_version": 0, 00:18:52.343 "enable_ktls": false 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "sock_impl_set_options", 00:18:52.343 "params": { 00:18:52.343 "impl_name": "posix", 00:18:52.343 "recv_buf_size": 2097152, 00:18:52.343 "send_buf_size": 2097152, 00:18:52.343 "enable_recv_pipe": true, 00:18:52.343 "enable_quickack": false, 00:18:52.343 "enable_placement_id": 0, 00:18:52.343 "enable_zerocopy_send_server": true, 00:18:52.343 "enable_zerocopy_send_client": false, 00:18:52.343 "zerocopy_threshold": 0, 00:18:52.343 "tls_version": 0, 00:18:52.343 "enable_ktls": 
false 00:18:52.343 } 00:18:52.343 } 00:18:52.343 ] 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "subsystem": "vmd", 00:18:52.343 "config": [] 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "subsystem": "accel", 00:18:52.343 "config": [ 00:18:52.343 { 00:18:52.343 "method": "accel_set_options", 00:18:52.343 "params": { 00:18:52.343 "small_cache_size": 128, 00:18:52.343 "large_cache_size": 16, 00:18:52.343 "task_count": 2048, 00:18:52.343 "sequence_count": 2048, 00:18:52.343 "buf_count": 2048 00:18:52.343 } 00:18:52.343 } 00:18:52.343 ] 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "subsystem": "bdev", 00:18:52.343 "config": [ 00:18:52.343 { 00:18:52.343 "method": "bdev_set_options", 00:18:52.343 "params": { 00:18:52.343 "bdev_io_pool_size": 65535, 00:18:52.343 "bdev_io_cache_size": 256, 00:18:52.343 "bdev_auto_examine": true, 00:18:52.343 "iobuf_small_cache_size": 128, 00:18:52.343 "iobuf_large_cache_size": 16 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "bdev_raid_set_options", 00:18:52.343 "params": { 00:18:52.343 "process_window_size_kb": 1024, 00:18:52.343 "process_max_bandwidth_mb_sec": 0 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "bdev_iscsi_set_options", 00:18:52.343 "params": { 00:18:52.343 "timeout_sec": 30 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "bdev_nvme_set_options", 00:18:52.343 "params": { 00:18:52.343 "action_on_timeout": "none", 00:18:52.343 "timeout_us": 0, 00:18:52.343 "timeout_admin_us": 0, 00:18:52.343 "keep_alive_timeout_ms": 10000, 00:18:52.343 "arbitration_burst": 0, 00:18:52.343 "low_priority_weight": 0, 00:18:52.343 "medium_priority_weight": 0, 00:18:52.343 "high_priority_weight": 0, 00:18:52.343 "nvme_adminq_poll_period_us": 10000, 00:18:52.343 "nvme_ioq_poll_period_us": 0, 00:18:52.343 "io_queue_requests": 0, 00:18:52.343 "delay_cmd_submit": true, 00:18:52.343 "transport_retry_count": 4, 00:18:52.343 "bdev_retry_count": 3, 00:18:52.343 "transport_ack_timeout": 0, 00:18:52.343 "ctrlr_loss_timeout_sec": 0, 00:18:52.343 "reconnect_delay_sec": 0, 00:18:52.343 "fast_io_fail_timeout_sec": 0, 00:18:52.343 "disable_auto_failback": false, 00:18:52.343 "generate_uuids": false, 00:18:52.343 "transport_tos": 0, 00:18:52.343 "nvme_error_stat": false, 00:18:52.343 "rdma_srq_size": 0, 00:18:52.343 "io_path_stat": false, 00:18:52.343 "allow_accel_sequence": false, 00:18:52.343 "rdma_max_cq_size": 0, 00:18:52.343 "rdma_cm_event_timeout_ms": 0, 00:18:52.343 "dhchap_digests": [ 00:18:52.343 "sha256", 00:18:52.343 "sha384", 00:18:52.343 "sha512" 00:18:52.343 ], 00:18:52.343 "dhchap_dhgroups": [ 00:18:52.343 "null", 00:18:52.343 "ffdhe2048", 00:18:52.343 "ffdhe3072", 00:18:52.343 "ffdhe4096", 00:18:52.343 "ffdhe6144", 00:18:52.343 "ffdhe8192" 00:18:52.343 ] 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "bdev_nvme_set_hotplug", 00:18:52.343 "params": { 00:18:52.343 "period_us": 100000, 00:18:52.343 "enable": false 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "bdev_malloc_create", 00:18:52.343 "params": { 00:18:52.343 "name": "malloc0", 00:18:52.343 "num_blocks": 8192, 00:18:52.343 "block_size": 4096, 00:18:52.343 "physical_block_size": 4096, 00:18:52.343 "uuid": "54514d8b-0ed7-4878-b505-4992722c4341", 00:18:52.343 "optimal_io_boundary": 0, 00:18:52.343 "md_size": 0, 00:18:52.343 "dif_type": 0, 00:18:52.343 "dif_is_head_of_md": false, 00:18:52.343 "dif_pi_format": 0 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "bdev_wait_for_examine" 
00:18:52.343 } 00:18:52.343 ] 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "subsystem": "nbd", 00:18:52.343 "config": [] 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "subsystem": "scheduler", 00:18:52.343 "config": [ 00:18:52.343 { 00:18:52.343 "method": "framework_set_scheduler", 00:18:52.343 "params": { 00:18:52.343 "name": "static" 00:18:52.343 } 00:18:52.343 } 00:18:52.343 ] 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "subsystem": "nvmf", 00:18:52.343 "config": [ 00:18:52.343 { 00:18:52.343 "method": "nvmf_set_config", 00:18:52.343 "params": { 00:18:52.343 "discovery_filter": "match_any", 00:18:52.343 "admin_cmd_passthru": { 00:18:52.343 "identify_ctrlr": false 00:18:52.343 }, 00:18:52.343 "dhchap_digests": [ 00:18:52.343 "sha256", 00:18:52.343 "sha384", 00:18:52.343 "sha512" 00:18:52.343 ], 00:18:52.343 "dhchap_dhgroups": [ 00:18:52.343 "null", 00:18:52.343 "ffdhe2048", 00:18:52.343 "ffdhe3072", 00:18:52.343 "ffdhe4096", 00:18:52.343 "ffdhe6144", 00:18:52.343 "ffdhe8192" 00:18:52.343 ] 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "nvmf_set_max_subsystems", 00:18:52.343 "params": { 00:18:52.343 "max_subsystems": 1024 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "nvmf_set_crdt", 00:18:52.343 "params": { 00:18:52.343 "crdt1": 0, 00:18:52.343 "crdt2": 0, 00:18:52.343 "crdt3": 0 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "nvmf_create_transport", 00:18:52.343 "params": { 00:18:52.343 "trtype": "TCP", 00:18:52.343 "max_queue_depth": 128, 00:18:52.343 "max_io_qpairs_per_ctrlr": 127, 00:18:52.343 "in_capsule_data_size": 4096, 00:18:52.343 "max_io_size": 131072, 00:18:52.343 "io_unit_size": 131072, 00:18:52.343 "max_aq_depth": 128, 00:18:52.343 "num_shared_buffers": 511, 00:18:52.343 "buf_cache_size": 4294967295, 00:18:52.343 "dif_insert_or_strip": false, 00:18:52.343 "zcopy": false, 00:18:52.343 "c2h_success": false, 00:18:52.343 "sock_priority": 0, 00:18:52.343 "abort_timeout_sec": 1, 00:18:52.343 "ack_timeout": 0, 00:18:52.343 "data_wr_pool_size": 0 00:18:52.343 } 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "method": "nvmf_create_subsystem", 00:18:52.343 "params": { 00:18:52.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.343 "allow_any_host": false, 00:18:52.343 "serial_number": "00000000000000000000", 00:18:52.343 "model_number": "SPDK bdev Controller", 00:18:52.343 "max_namespaces": 32, 00:18:52.343 "min_cntlid": 1, 00:18:52.343 "max_cntlid": 65519, 00:18:52.344 "ana_reporting": false 00:18:52.344 } 00:18:52.344 }, 00:18:52.344 { 00:18:52.344 "method": "nvmf_subsystem_add_host", 00:18:52.344 "params": { 00:18:52.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.344 "host": "nqn.2016-06.io.spdk:host1", 00:18:52.344 "psk": "key0" 00:18:52.344 } 00:18:52.344 }, 00:18:52.344 { 00:18:52.344 "method": "nvmf_subsystem_add_ns", 00:18:52.344 "params": { 00:18:52.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.344 "namespace": { 00:18:52.344 "nsid": 1, 00:18:52.344 "bdev_name": "malloc0", 00:18:52.344 "nguid": "54514D8B0ED74878B5054992722C4341", 00:18:52.344 "uuid": "54514d8b-0ed7-4878-b505-4992722c4341", 00:18:52.344 "no_auto_visible": false 00:18:52.344 } 00:18:52.344 } 00:18:52.344 }, 00:18:52.344 { 00:18:52.344 "method": "nvmf_subsystem_add_listener", 00:18:52.344 "params": { 00:18:52.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.344 "listen_address": { 00:18:52.344 "trtype": "TCP", 00:18:52.344 "adrfam": "IPv4", 00:18:52.344 "traddr": "10.0.0.2", 00:18:52.344 "trsvcid": "4420" 00:18:52.344 }, 00:18:52.344 
"secure_channel": false, 00:18:52.344 "sock_impl": "ssl" 00:18:52.344 } 00:18:52.344 } 00:18:52.344 ] 00:18:52.344 } 00:18:52.344 ] 00:18:52.344 }' 00:18:52.344 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.603 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1486528 00:18:52.603 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:52.603 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1486528 00:18:52.603 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1486528 ']' 00:18:52.603 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.603 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.603 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.603 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.603 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.603 [2024-11-17 14:28:41.611495] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:52.603 [2024-11-17 14:28:41.611543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.603 [2024-11-17 14:28:41.692545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.603 [2024-11-17 14:28:41.732857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.603 [2024-11-17 14:28:41.732894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.603 [2024-11-17 14:28:41.732902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.603 [2024-11-17 14:28:41.732908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.603 [2024-11-17 14:28:41.732913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.603 [2024-11-17 14:28:41.733519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.862 [2024-11-17 14:28:41.944865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.862 [2024-11-17 14:28:41.976901] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:52.862 [2024-11-17 14:28:41.977107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1486770 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1486770 /var/tmp/bdevperf.sock 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1486770 ']' 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
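The bdevperf side is then started idle (-z) with its own configuration arriving on /dev/fd/63 the same way, and the harness polls the RPC socket before issuing work. Roughly, with bperf.json standing in for the config echoed below:

    # Start bdevperf idle (-z) on its own RPC socket; bperf.json is a
    # placeholder for the bdev/NVMe JSON the test pipes in on an anonymous fd.
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(cat bperf.json) &

    # Once the socket is listening, confirm the TLS-attached controller exists.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'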
00:18:53.431 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:53.431 "subsystems": [ 00:18:53.431 { 00:18:53.431 "subsystem": "keyring", 00:18:53.431 "config": [ 00:18:53.431 { 00:18:53.431 "method": "keyring_file_add_key", 00:18:53.431 "params": { 00:18:53.431 "name": "key0", 00:18:53.431 "path": "/tmp/tmp.V4UOsF7x0p" 00:18:53.431 } 00:18:53.431 } 00:18:53.431 ] 00:18:53.431 }, 00:18:53.431 { 00:18:53.431 "subsystem": "iobuf", 00:18:53.431 "config": [ 00:18:53.431 { 00:18:53.431 "method": "iobuf_set_options", 00:18:53.431 "params": { 00:18:53.431 "small_pool_count": 8192, 00:18:53.431 "large_pool_count": 1024, 00:18:53.431 "small_bufsize": 8192, 00:18:53.431 "large_bufsize": 135168, 00:18:53.431 "enable_numa": false 00:18:53.431 } 00:18:53.431 } 00:18:53.431 ] 00:18:53.431 }, 00:18:53.431 { 00:18:53.431 "subsystem": "sock", 00:18:53.431 "config": [ 00:18:53.431 { 00:18:53.431 "method": "sock_set_default_impl", 00:18:53.431 "params": { 00:18:53.431 "impl_name": "posix" 00:18:53.431 } 00:18:53.431 }, 00:18:53.431 { 00:18:53.431 "method": "sock_impl_set_options", 00:18:53.431 "params": { 00:18:53.431 "impl_name": "ssl", 00:18:53.431 "recv_buf_size": 4096, 00:18:53.431 "send_buf_size": 4096, 00:18:53.431 "enable_recv_pipe": true, 00:18:53.431 "enable_quickack": false, 00:18:53.431 "enable_placement_id": 0, 00:18:53.431 "enable_zerocopy_send_server": true, 00:18:53.431 "enable_zerocopy_send_client": false, 00:18:53.431 "zerocopy_threshold": 0, 00:18:53.431 "tls_version": 0, 00:18:53.431 "enable_ktls": false 00:18:53.431 } 00:18:53.431 }, 00:18:53.431 { 00:18:53.431 "method": "sock_impl_set_options", 00:18:53.431 "params": { 00:18:53.431 "impl_name": "posix", 00:18:53.431 "recv_buf_size": 2097152, 00:18:53.431 "send_buf_size": 2097152, 00:18:53.431 "enable_recv_pipe": true, 00:18:53.431 "enable_quickack": false, 00:18:53.431 "enable_placement_id": 0, 00:18:53.431 "enable_zerocopy_send_server": true, 00:18:53.431 "enable_zerocopy_send_client": false, 00:18:53.431 "zerocopy_threshold": 0, 00:18:53.431 "tls_version": 0, 00:18:53.431 "enable_ktls": false 00:18:53.431 } 00:18:53.431 } 00:18:53.431 ] 00:18:53.431 }, 00:18:53.431 { 00:18:53.431 "subsystem": "vmd", 00:18:53.431 "config": [] 00:18:53.431 }, 00:18:53.431 { 00:18:53.431 "subsystem": "accel", 00:18:53.431 "config": [ 00:18:53.431 { 00:18:53.431 "method": "accel_set_options", 00:18:53.431 "params": { 00:18:53.431 "small_cache_size": 128, 00:18:53.431 "large_cache_size": 16, 00:18:53.431 "task_count": 2048, 00:18:53.431 "sequence_count": 2048, 00:18:53.431 "buf_count": 2048 00:18:53.431 } 00:18:53.431 } 00:18:53.431 ] 00:18:53.431 }, 00:18:53.431 { 00:18:53.431 "subsystem": "bdev", 00:18:53.431 "config": [ 00:18:53.431 { 00:18:53.431 "method": "bdev_set_options", 00:18:53.431 "params": { 00:18:53.431 "bdev_io_pool_size": 65535, 00:18:53.431 "bdev_io_cache_size": 256, 00:18:53.431 "bdev_auto_examine": true, 00:18:53.431 "iobuf_small_cache_size": 128, 00:18:53.431 "iobuf_large_cache_size": 16 00:18:53.431 } 00:18:53.431 }, 00:18:53.431 { 00:18:53.431 "method": "bdev_raid_set_options", 00:18:53.431 "params": { 00:18:53.431 "process_window_size_kb": 1024, 00:18:53.431 "process_max_bandwidth_mb_sec": 0 00:18:53.431 } 00:18:53.431 }, 00:18:53.431 { 00:18:53.431 "method": "bdev_iscsi_set_options", 00:18:53.431 "params": { 00:18:53.431 "timeout_sec": 30 00:18:53.431 } 00:18:53.431 }, 00:18:53.431 { 00:18:53.431 "method": "bdev_nvme_set_options", 00:18:53.431 "params": { 00:18:53.431 "action_on_timeout": "none", 
00:18:53.431 "timeout_us": 0, 00:18:53.432 "timeout_admin_us": 0, 00:18:53.432 "keep_alive_timeout_ms": 10000, 00:18:53.432 "arbitration_burst": 0, 00:18:53.432 "low_priority_weight": 0, 00:18:53.432 "medium_priority_weight": 0, 00:18:53.432 "high_priority_weight": 0, 00:18:53.432 "nvme_adminq_poll_period_us": 10000, 00:18:53.432 "nvme_ioq_poll_period_us": 0, 00:18:53.432 "io_queue_requests": 512, 00:18:53.432 "delay_cmd_submit": true, 00:18:53.432 "transport_retry_count": 4, 00:18:53.432 "bdev_retry_count": 3, 00:18:53.432 "transport_ack_timeout": 0, 00:18:53.432 "ctrlr_loss_timeout_sec": 0, 00:18:53.432 "reconnect_delay_sec": 0, 00:18:53.432 "fast_io_fail_timeout_sec": 0, 00:18:53.432 "disable_auto_failback": false, 00:18:53.432 "generate_uuids": false, 00:18:53.432 "transport_tos": 0, 00:18:53.432 "nvme_error_stat": false, 00:18:53.432 "rdma_srq_size": 0, 00:18:53.432 "io_path_stat": false, 00:18:53.432 "allow_accel_sequence": false, 00:18:53.432 "rdma_max_cq_size": 0, 00:18:53.432 "rdma_cm_event_timeout_ms": 0, 00:18:53.432 "dhchap_digests": [ 00:18:53.432 "sha256", 00:18:53.432 "sha384", 00:18:53.432 "sha512" 00:18:53.432 ], 00:18:53.432 "dhchap_dhgroups": [ 00:18:53.432 "null", 00:18:53.432 "ffdhe2048", 00:18:53.432 "ffdhe3072", 00:18:53.432 "ffdhe4096", 00:18:53.432 "ffdhe6144", 00:18:53.432 "ffdhe8192" 00:18:53.432 ] 00:18:53.432 } 00:18:53.432 }, 00:18:53.432 { 00:18:53.432 "method": "bdev_nvme_attach_controller", 00:18:53.432 "params": { 00:18:53.432 "name": "nvme0", 00:18:53.432 "trtype": "TCP", 00:18:53.432 "adrfam": "IPv4", 00:18:53.432 "traddr": "10.0.0.2", 00:18:53.432 "trsvcid": "4420", 00:18:53.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.432 "prchk_reftag": false, 00:18:53.432 "prchk_guard": false, 00:18:53.432 "ctrlr_loss_timeout_sec": 0, 00:18:53.432 "reconnect_delay_sec": 0, 00:18:53.432 "fast_io_fail_timeout_sec": 0, 00:18:53.432 "psk": "key0", 00:18:53.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.432 "hdgst": false, 00:18:53.432 "ddgst": false, 00:18:53.432 "multipath": "multipath" 00:18:53.432 } 00:18:53.432 }, 00:18:53.432 { 00:18:53.432 "method": "bdev_nvme_set_hotplug", 00:18:53.432 "params": { 00:18:53.432 "period_us": 100000, 00:18:53.432 "enable": false 00:18:53.432 } 00:18:53.432 }, 00:18:53.432 { 00:18:53.432 "method": "bdev_enable_histogram", 00:18:53.432 "params": { 00:18:53.432 "name": "nvme0n1", 00:18:53.432 "enable": true 00:18:53.432 } 00:18:53.432 }, 00:18:53.432 { 00:18:53.432 "method": "bdev_wait_for_examine" 00:18:53.432 } 00:18:53.432 ] 00:18:53.432 }, 00:18:53.432 { 00:18:53.432 "subsystem": "nbd", 00:18:53.432 "config": [] 00:18:53.432 } 00:18:53.432 ] 00:18:53.432 }' 00:18:53.432 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.432 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.432 [2024-11-17 14:28:42.529570] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:18:53.432 [2024-11-17 14:28:42.529616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486770 ] 00:18:53.432 [2024-11-17 14:28:42.603083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.432 [2024-11-17 14:28:42.643463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.692 [2024-11-17 14:28:42.796595] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.260 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.260 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.260 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:54.260 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:54.519 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.519 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.519 Running I/O for 1 seconds... 00:18:55.456 5174.00 IOPS, 20.21 MiB/s 00:18:55.456 Latency(us) 00:18:55.456 [2024-11-17T13:28:44.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.456 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:55.456 Verification LBA range: start 0x0 length 0x2000 00:18:55.456 nvme0n1 : 1.02 5220.24 20.39 0.00 0.00 24356.79 5869.75 33052.94 00:18:55.456 [2024-11-17T13:28:44.681Z] =================================================================================================================== 00:18:55.456 [2024-11-17T13:28:44.681Z] Total : 5220.24 20.39 0.00 0.00 24356.79 5869.75 33052.94 00:18:55.456 { 00:18:55.456 "results": [ 00:18:55.456 { 00:18:55.456 "job": "nvme0n1", 00:18:55.456 "core_mask": "0x2", 00:18:55.456 "workload": "verify", 00:18:55.456 "status": "finished", 00:18:55.456 "verify_range": { 00:18:55.456 "start": 0, 00:18:55.456 "length": 8192 00:18:55.456 }, 00:18:55.456 "queue_depth": 128, 00:18:55.456 "io_size": 4096, 00:18:55.456 "runtime": 1.015662, 00:18:55.456 "iops": 5220.240591850438, 00:18:55.456 "mibps": 20.391564811915774, 00:18:55.456 "io_failed": 0, 00:18:55.456 "io_timeout": 0, 00:18:55.456 "avg_latency_us": 24356.787115608546, 00:18:55.456 "min_latency_us": 5869.746086956522, 00:18:55.456 "max_latency_us": 33052.93913043478 00:18:55.456 } 00:18:55.456 ], 00:18:55.456 "core_count": 1 00:18:55.456 } 00:18:55.715 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:55.715 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:55.715 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:55.715 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:55.715 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:55.715 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:18:55.715 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:55.716 nvmf_trace.0 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1486770 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1486770 ']' 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1486770 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486770 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486770' 00:18:55.716 killing process with pid 1486770 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1486770 00:18:55.716 Received shutdown signal, test time was about 1.000000 seconds 00:18:55.716 00:18:55.716 Latency(us) 00:18:55.716 [2024-11-17T13:28:44.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.716 [2024-11-17T13:28:44.941Z] =================================================================================================================== 00:18:55.716 [2024-11-17T13:28:44.941Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.716 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1486770 00:18:55.976 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:55.976 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:55.976 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:55.976 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:55.976 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:55.976 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:55.976 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:55.976 rmmod nvme_tcp 00:18:55.976 rmmod nvme_fabrics 00:18:55.976 rmmod nvme_keyring 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:55.976 14:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1486528 ']' 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1486528 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1486528 ']' 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1486528 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486528 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486528' 00:18:55.976 killing process with pid 1486528 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1486528 00:18:55.976 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1486528 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.235 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.142 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:58.142 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.pp7zSJ8e3K /tmp/tmp.1KT3KpzXkv /tmp/tmp.V4UOsF7x0p 00:18:58.142 00:18:58.142 real 1m19.254s 00:18:58.142 user 2m1.236s 00:18:58.142 sys 0m30.482s 00:18:58.142 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.142 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.142 ************************************ 00:18:58.142 END TEST nvmf_tls 
00:18:58.142 ************************************ 00:18:58.142 14:28:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.402 ************************************ 00:18:58.402 START TEST nvmf_fips 00:18:58.402 ************************************ 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:58.402 * Looking for test storage... 00:18:58.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:58.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.402 --rc genhtml_branch_coverage=1 00:18:58.402 --rc genhtml_function_coverage=1 00:18:58.402 --rc genhtml_legend=1 00:18:58.402 --rc geninfo_all_blocks=1 00:18:58.402 --rc geninfo_unexecuted_blocks=1 00:18:58.402 00:18:58.402 ' 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:58.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.402 --rc genhtml_branch_coverage=1 00:18:58.402 --rc genhtml_function_coverage=1 00:18:58.402 --rc genhtml_legend=1 00:18:58.402 --rc geninfo_all_blocks=1 00:18:58.402 --rc geninfo_unexecuted_blocks=1 00:18:58.402 00:18:58.402 ' 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:58.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.402 --rc genhtml_branch_coverage=1 00:18:58.402 --rc genhtml_function_coverage=1 00:18:58.402 --rc genhtml_legend=1 00:18:58.402 --rc geninfo_all_blocks=1 00:18:58.402 --rc geninfo_unexecuted_blocks=1 00:18:58.402 00:18:58.402 ' 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:58.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.402 --rc genhtml_branch_coverage=1 00:18:58.402 --rc genhtml_function_coverage=1 00:18:58.402 --rc genhtml_legend=1 00:18:58.402 --rc geninfo_all_blocks=1 00:18:58.402 --rc geninfo_unexecuted_blocks=1 00:18:58.402 00:18:58.402 ' 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.402 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:58.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:58.403 14:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.403 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:58.663 Error setting digest 00:18:58.663 40424CCEA27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:58.663 40424CCEA27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.663 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:58.664 
14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:58.664 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.235 14:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:05.235 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:05.235 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:05.235 14:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:05.235 Found net devices under 0000:86:00.0: cvl_0_0 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:05.235 Found net devices under 0000:86:00.1: cvl_0_1 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:05.235 14:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:05.235 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:05.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:19:05.236 00:19:05.236 --- 10.0.0.2 ping statistics --- 00:19:05.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.236 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:05.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:19:05.236 00:19:05.236 --- 10.0.0.1 ping statistics --- 00:19:05.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.236 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1490787 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1490787 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1490787 ']' 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.236 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.236 [2024-11-17 14:28:53.816114] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
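The namespace plumbing traced above (nvmf_tcp_init in nvmf/common.sh) reduces to the short sequence below, a condensed sketch using the interface names from this run (cvl_0_0/cvl_0_1, which will differ on other rigs):

    # Put the target-side E810 port in its own netns so initiator and target
    # traffic actually crosses the link, then address both ends and open the
    # NVMe/TCP port.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The rule carries an SPDK_NVMF comment so teardown can strip it back
    # out of iptables-save output without touching anything else.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # target reachable?
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and back again

The sub-millisecond round trips in the ping output above verify the link before nvmf_tgt is launched inside the namespace (the "Starting SPDK v25.01-pre" line).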
00:19:05.236 [2024-11-17 14:28:53.816164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.236 [2024-11-17 14:28:53.897676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.236 [2024-11-17 14:28:53.936058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.236 [2024-11-17 14:28:53.936092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.236 [2024-11-17 14:28:53.936099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.236 [2024-11-17 14:28:53.936105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.236 [2024-11-17 14:28:53.936110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.236 [2024-11-17 14:28:53.936703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Iih 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Iih 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Iih 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Iih 00:19:05.495 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.755 [2024-11-17 14:28:54.877941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.755 [2024-11-17 14:28:54.893950] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.755 [2024-11-17 14:28:54.894158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.755 malloc0 00:19:05.755 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.755 14:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1491037 00:19:05.755 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.755 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1491037 /var/tmp/bdevperf.sock 00:19:05.755 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1491037 ']' 00:19:05.755 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.755 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.755 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.755 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.755 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.014 [2024-11-17 14:28:55.023097] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:19:06.014 [2024-11-17 14:28:55.023145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491037 ] 00:19:06.014 [2024-11-17 14:28:55.091923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.014 [2024-11-17 14:28:55.132268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.950 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.950 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:06.950 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Iih 00:19:06.950 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:07.209 [2024-11-17 14:28:56.205059] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.209 TLSTESTn1 00:19:07.209 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:07.209 Running I/O for 10 seconds... 
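The ten-second verify run whose per-second samples follow is driven entirely over bdevperf's private RPC socket. Condensed from the trace (rpc.py and bdevperf.py paths shortened, everything else as logged; the PSK is the hard-coded test key from fips.sh, not a secret):

    # bdevperf was launched as: bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    #   -q 128 -o 4096 -w verify -t 10   (-z: idle until configured over RPC)
    KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    KEY_PATH=$(mktemp -t spdk-psk.XXX)
    echo -n "$KEY" > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"                 # owner-only, as fips.sh does
    # Register the interchange-format PSK with the keyring, then attach the
    # controller over TLS using it.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # starts the run

The "TLS support is considered experimental" notices on both listener and attach sides mark where the handshake actually takes place.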
00:19:09.524 5257.00 IOPS, 20.54 MiB/s [2024-11-17T13:28:59.686Z] 5317.50 IOPS, 20.77 MiB/s [2024-11-17T13:29:00.623Z] 5363.33 IOPS, 20.95 MiB/s [2024-11-17T13:29:01.559Z] 5389.75 IOPS, 21.05 MiB/s [2024-11-17T13:29:02.496Z] 5383.80 IOPS, 21.03 MiB/s [2024-11-17T13:29:03.433Z] 5372.83 IOPS, 20.99 MiB/s [2024-11-17T13:29:04.813Z] 5387.00 IOPS, 21.04 MiB/s [2024-11-17T13:29:05.750Z] 5388.38 IOPS, 21.05 MiB/s [2024-11-17T13:29:06.688Z] 5388.78 IOPS, 21.05 MiB/s [2024-11-17T13:29:06.688Z] 5356.40 IOPS, 20.92 MiB/s 00:19:17.463 Latency(us) 00:19:17.463 [2024-11-17T13:29:06.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.463 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:17.463 Verification LBA range: start 0x0 length 0x2000 00:19:17.463 TLSTESTn1 : 10.01 5362.36 20.95 0.00 0.00 23836.80 4957.94 28265.96 00:19:17.463 [2024-11-17T13:29:06.688Z] =================================================================================================================== 00:19:17.463 [2024-11-17T13:29:06.688Z] Total : 5362.36 20.95 0.00 0.00 23836.80 4957.94 28265.96 00:19:17.463 { 00:19:17.463 "results": [ 00:19:17.463 { 00:19:17.463 "job": "TLSTESTn1", 00:19:17.463 "core_mask": "0x4", 00:19:17.463 "workload": "verify", 00:19:17.463 "status": "finished", 00:19:17.463 "verify_range": { 00:19:17.463 "start": 0, 00:19:17.463 "length": 8192 00:19:17.463 }, 00:19:17.463 "queue_depth": 128, 00:19:17.463 "io_size": 4096, 00:19:17.463 "runtime": 10.01256, 00:19:17.463 "iops": 5362.364869723627, 00:19:17.463 "mibps": 20.946737772357917, 00:19:17.463 "io_failed": 0, 00:19:17.463 "io_timeout": 0, 00:19:17.463 "avg_latency_us": 23836.803171254513, 00:19:17.463 "min_latency_us": 4957.940869565217, 00:19:17.463 "max_latency_us": 28265.961739130435 00:19:17.463 } 00:19:17.463 ], 00:19:17.463 "core_count": 1 00:19:17.463 } 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:17.463 nvmf_trace.0 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1491037 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1491037 ']' 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 1491037 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1491037 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1491037' 00:19:17.463 killing process with pid 1491037 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1491037 00:19:17.463 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.463 00:19:17.463 Latency(us) 00:19:17.463 [2024-11-17T13:29:06.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.463 [2024-11-17T13:29:06.688Z] =================================================================================================================== 00:19:17.463 [2024-11-17T13:29:06.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.463 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1491037 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:17.724 rmmod nvme_tcp 00:19:17.724 rmmod nvme_fabrics 00:19:17.724 rmmod nvme_keyring 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1490787 ']' 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1490787 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1490787 ']' 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1490787 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1490787 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:17.724 14:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1490787' 00:19:17.724 killing process with pid 1490787 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1490787 00:19:17.724 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1490787 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.984 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Iih 00:19:20.523 00:19:20.523 real 0m21.726s 00:19:20.523 user 0m23.552s 00:19:20.523 sys 0m9.615s 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.523 ************************************ 00:19:20.523 END TEST nvmf_fips 00:19:20.523 ************************************ 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:20.523 ************************************ 00:19:20.523 START TEST nvmf_control_msg_list 00:19:20.523 ************************************ 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:20.523 * Looking for test storage... 
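Both suites above finish with the same teardown before the next test probes for storage: kill the per-test app, then the target, unload the kernel modules, and strip only the tagged firewall rules. A condensed sketch (killprocess/nvmftestfini internals elided; _remove_spdk_ns runs with tracing suppressed, which is why no namespace deletion commands appear in this log):

    kill "$bdevperf_pid" && wait "$bdevperf_pid"   # per-test app first
    kill "$nvmfpid" && wait "$nvmfpid"             # then nvmf_tgt itself
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics    # no-op by now, kept for completeness
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # only our rules go
    ip -4 addr flush cvl_0_1                       # drop the test addresses
    rm -f /tmp/spdk-psk.Iih                        # PSK files never outlive a run

Keying the firewall cleanup off the SPDK_NVMF comment means any pre-existing iptables rules on the rig survive every test cycle.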
00:19:20.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:20.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.523 --rc genhtml_branch_coverage=1 00:19:20.523 --rc genhtml_function_coverage=1 00:19:20.523 --rc genhtml_legend=1 00:19:20.523 --rc geninfo_all_blocks=1 00:19:20.523 --rc geninfo_unexecuted_blocks=1 00:19:20.523 00:19:20.523 ' 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:20.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.523 --rc genhtml_branch_coverage=1 00:19:20.523 --rc genhtml_function_coverage=1 00:19:20.523 --rc genhtml_legend=1 00:19:20.523 --rc geninfo_all_blocks=1 00:19:20.523 --rc geninfo_unexecuted_blocks=1 00:19:20.523 00:19:20.523 ' 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:20.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.523 --rc genhtml_branch_coverage=1 00:19:20.523 --rc genhtml_function_coverage=1 00:19:20.523 --rc genhtml_legend=1 00:19:20.523 --rc geninfo_all_blocks=1 00:19:20.523 --rc geninfo_unexecuted_blocks=1 00:19:20.523 00:19:20.523 ' 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:20.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.523 --rc genhtml_branch_coverage=1 00:19:20.523 --rc genhtml_function_coverage=1 00:19:20.523 --rc genhtml_legend=1 00:19:20.523 --rc geninfo_all_blocks=1 00:19:20.523 --rc geninfo_unexecuted_blocks=1 00:19:20.523 00:19:20.523 ' 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.523 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:20.524 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:27.097 14:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:27.097 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.097 14:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:27.097 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:27.097 Found net devices under 0000:86:00.0: cvl_0_0 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:27.097 Found net devices under 0000:86:00.1: cvl_0_1 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:27.097 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:27.098 14:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:27.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:19:27.098 00:19:27.098 --- 10.0.0.2 ping statistics --- 00:19:27.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.098 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:27.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:19:27.098 00:19:27.098 --- 10.0.0.1 ping statistics --- 00:19:27.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.098 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1496411 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1496411 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1496411 ']' 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.098 [2024-11-17 14:29:15.431560] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:19:27.098 [2024-11-17 14:29:15.431602] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.098 [2024-11-17 14:29:15.511511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.098 [2024-11-17 14:29:15.553171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.098 [2024-11-17 14:29:15.553206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.098 [2024-11-17 14:29:15.553214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.098 [2024-11-17 14:29:15.553220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.098 [2024-11-17 14:29:15.553225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
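Before the target app was started, nvmf_tcp_init built the two-endpoint test bed that the pings above verified. A minimal sketch of that setup, assuming the cvl_0_0/cvl_0_1 net devices discovered under the e810 NICs earlier in this log (run as root; names differ on other hardware):

#!/usr/bin/env bash
# Target interface moves into its own namespace and serves 10.0.0.2:4420;
# the initiator interface stays in the root namespace as 10.0.0.1.
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in on port 4420, tagged so teardown can strip it.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

# Sanity pings in both directions, as in the log above.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1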
00:19:27.098 [2024-11-17 14:29:15.553801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.098 [2024-11-17 14:29:15.697634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.098 Malloc0 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.098 14:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.098 [2024-11-17 14:29:15.746107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1496431 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1496432 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1496433 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1496431 00:19:27.098 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:27.098 [2024-11-17 14:29:15.826544] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:27.098 [2024-11-17 14:29:15.836592] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:27.098 [2024-11-17 14:29:15.836748] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:28.037 Initializing NVMe Controllers 00:19:28.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:28.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:28.037 Initialization complete. Launching workers. 
00:19:28.037 ======================================================== 00:19:28.037 Latency(us) 00:19:28.037 Device Information : IOPS MiB/s Average min max 00:19:28.037 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40893.04 40817.71 40954.69 00:19:28.037 ======================================================== 00:19:28.037 Total : 25.00 0.10 40893.04 40817.71 40954.69 00:19:28.037 00:19:28.037 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1496432 00:19:28.037 Initializing NVMe Controllers 00:19:28.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:28.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:28.037 Initialization complete. Launching workers. 00:19:28.037 ======================================================== 00:19:28.037 Latency(us) 00:19:28.037 Device Information : IOPS MiB/s Average min max 00:19:28.037 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41066.48 40838.69 42051.79 00:19:28.037 ======================================================== 00:19:28.037 Total : 25.00 0.10 41066.48 40838.69 42051.79 00:19:28.037 00:19:28.037 [2024-11-17 14:29:16.927426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5e400 is same with the state(6) to be set 00:19:28.037 Initializing NVMe Controllers 00:19:28.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:28.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:28.037 Initialization complete. Launching workers. 00:19:28.037 ======================================================== 00:19:28.037 Latency(us) 00:19:28.037 Device Information : IOPS MiB/s Average min max 00:19:28.037 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3906.00 15.26 255.63 124.02 509.22 00:19:28.037 ======================================================== 00:19:28.037 Total : 3906.00 15.26 255.63 124.02 509.22 00:19:28.037 00:19:28.037 [2024-11-17 14:29:17.020696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9ae90 is same with the state(6) to be set 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1496433 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.037 rmmod nvme_tcp 00:19:28.037 rmmod nvme_fabrics 00:19:28.037 rmmod nvme_keyring 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1496411 ']' 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1496411 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1496411 ']' 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1496411 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1496411 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1496411' 00:19:28.037 killing process with pid 1496411 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1496411 00:19:28.037 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1496411 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.297 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.301 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:30.301 00:19:30.301 real 0m10.169s 00:19:30.301 user 0m6.742s 00:19:30.301 sys 0m5.384s 00:19:30.301 14:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.301 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.301 ************************************ 00:19:30.301 END TEST nvmf_control_msg_list 00:19:30.301 ************************************ 00:19:30.301 14:29:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:30.301 14:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:30.301 14:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.301 14:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:30.301 ************************************ 00:19:30.301 START TEST nvmf_wait_for_buf 00:19:30.301 ************************************ 00:19:30.301 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:30.571 * Looking for test storage... 00:19:30.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:30.571 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:30.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.572 --rc genhtml_branch_coverage=1 00:19:30.572 --rc genhtml_function_coverage=1 00:19:30.572 --rc genhtml_legend=1 00:19:30.572 --rc geninfo_all_blocks=1 00:19:30.572 --rc geninfo_unexecuted_blocks=1 00:19:30.572 00:19:30.572 ' 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:30.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.572 --rc genhtml_branch_coverage=1 00:19:30.572 --rc genhtml_function_coverage=1 00:19:30.572 --rc genhtml_legend=1 00:19:30.572 --rc geninfo_all_blocks=1 00:19:30.572 --rc geninfo_unexecuted_blocks=1 00:19:30.572 00:19:30.572 ' 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:30.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.572 --rc genhtml_branch_coverage=1 00:19:30.572 --rc genhtml_function_coverage=1 00:19:30.572 --rc genhtml_legend=1 00:19:30.572 --rc geninfo_all_blocks=1 00:19:30.572 --rc geninfo_unexecuted_blocks=1 00:19:30.572 00:19:30.572 ' 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:30.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.572 --rc genhtml_branch_coverage=1 00:19:30.572 --rc genhtml_function_coverage=1 00:19:30.572 --rc genhtml_legend=1 00:19:30.572 --rc geninfo_all_blocks=1 00:19:30.572 --rc geninfo_unexecuted_blocks=1 00:19:30.572 00:19:30.572 ' 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.572 14:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:30.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:30.572 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.573 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.573 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.573 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:30.573 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:30.573 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:30.573 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.149 
14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:37.149 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:37.149 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.149 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:37.150 Found net devices under 0000:86:00.0: cvl_0_0 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:37.150 Found net devices under 0000:86:00.1: cvl_0_1 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.150 14:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:37.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:19:37.150 00:19:37.150 --- 10.0.0.2 ping statistics --- 00:19:37.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.150 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:19:37.150 00:19:37.150 --- 10.0.0.1 ping statistics --- 00:19:37.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.150 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1500194 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1500194 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1500194 ']' 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.150 [2024-11-17 14:29:25.704095] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
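For readers following the trace, the nvmf_tcp_init steps above reduce to a short shell recipe. This is a minimal sketch assuming the values from this particular run (E810 ports cvl_0_0/cvl_0_1, the 10.0.0.0/24 subnet, NVMe/TCP port 4420); it is illustrative only, not a substitute for nvmf/common.sh:

# Clear any stale addressing on both ports first.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Put the target-side port into its own network namespace so initiator
# and target traffic actually traverse the physical link.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

# Initiator IP stays in the root namespace; target IP lives inside it.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring everything up, including loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP (port 4420) on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions before launching nvmf_tgt
# inside the namespace with "ip netns exec $NS nvmf_tgt ...".
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1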
00:19:37.150 [2024-11-17 14:29:25.704137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.150 [2024-11-17 14:29:25.782611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.150 [2024-11-17 14:29:25.820956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.150 [2024-11-17 14:29:25.820992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.150 [2024-11-17 14:29:25.820999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.150 [2024-11-17 14:29:25.821006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.150 [2024-11-17 14:29:25.821011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.150 [2024-11-17 14:29:25.821564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.150 14:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:37.150 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.151 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.151 Malloc0 00:19:37.151 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.151 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:37.151 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.151 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.151 [2024-11-17 14:29:26.002950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.151 [2024-11-17 14:29:26.031140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.151 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:37.151 [2024-11-17 14:29:26.121341] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:38.530 Initializing NVMe Controllers 00:19:38.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:38.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:38.530 Initialization complete. Launching workers. 00:19:38.530 ======================================================== 00:19:38.530 Latency(us) 00:19:38.530 Device Information : IOPS MiB/s Average min max 00:19:38.530 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.00 15.87 32745.97 7249.18 63845.53 00:19:38.530 ======================================================== 00:19:38.530 Total : 127.00 15.87 32745.97 7249.18 63845.53 00:19:38.530 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2006 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2006 -eq 0 ]] 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.530 rmmod nvme_tcp 00:19:38.530 rmmod nvme_fabrics 00:19:38.530 rmmod nvme_keyring 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1500194 ']' 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1500194 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1500194 ']' 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1500194 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1500194 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1500194' 00:19:38.530 killing process with pid 1500194 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1500194 00:19:38.530 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1500194 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.790 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.706 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:40.706 00:19:40.706 real 0m10.472s 00:19:40.706 user 0m3.949s 00:19:40.706 sys 0m4.980s 00:19:40.706 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.706 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.706 ************************************ 00:19:40.706 END TEST nvmf_wait_for_buf 00:19:40.706 ************************************ 00:19:40.966 14:29:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:40.966 14:29:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:40.966 14:29:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:40.966 14:29:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:40.966 14:29:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:40.966 14:29:29 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:47.539 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:47.539 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:47.539 Found net devices under 0000:86:00.0: cvl_0_0 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:47.539 Found net devices under 0000:86:00.1: cvl_0_1 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.539 14:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.539 ************************************ 00:19:47.540 START TEST nvmf_perf_adq 00:19:47.540 ************************************ 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:47.540 * Looking for test storage... 00:19:47.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.540 14:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.540 --rc genhtml_branch_coverage=1 00:19:47.540 --rc genhtml_function_coverage=1 00:19:47.540 --rc genhtml_legend=1 00:19:47.540 --rc geninfo_all_blocks=1 00:19:47.540 --rc geninfo_unexecuted_blocks=1 00:19:47.540 00:19:47.540 ' 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.540 --rc genhtml_branch_coverage=1 00:19:47.540 --rc genhtml_function_coverage=1 00:19:47.540 --rc genhtml_legend=1 00:19:47.540 --rc geninfo_all_blocks=1 00:19:47.540 --rc geninfo_unexecuted_blocks=1 00:19:47.540 00:19:47.540 ' 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.540 --rc genhtml_branch_coverage=1 00:19:47.540 --rc genhtml_function_coverage=1 00:19:47.540 --rc genhtml_legend=1 00:19:47.540 --rc geninfo_all_blocks=1 00:19:47.540 --rc geninfo_unexecuted_blocks=1 00:19:47.540 00:19:47.540 ' 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.540 --rc genhtml_branch_coverage=1 00:19:47.540 --rc genhtml_function_coverage=1 00:19:47.540 --rc genhtml_legend=1 00:19:47.540 --rc geninfo_all_blocks=1 00:19:47.540 --rc geninfo_unexecuted_blocks=1 00:19:47.540 00:19:47.540 ' 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
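The cmp_versions trace just above (lt 1.15 2, used to decide which lcov option spellings to export) is the usual component-wise comparison of dotted version strings. The following is a condensed sketch of that pattern using the same split characters (.-:) seen in the trace; SPDK's scripts/common.sh helper is more general, so treat this as illustrative only:

lt() { # succeed when version $1 is strictly older than version $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing components default to 0, so 1.15 vs 2 compares as 1.15 vs 2.0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1 # equal versions are not less-than
}

lt 1.15 2 && echo 'lcov older than 2.x: keep the legacy --rc lcov_* spellings'

Here lt 1.15 2 succeeds (1 < 2 on the first component), which is why the run above exports LCOV_OPTS with the pre-2.x --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spellings.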
00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.540 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.541 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:47.541 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.541 14:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:52.819 14:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:52.819 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.819 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:52.820 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:52.820 Found net devices under 0000:86:00.0: cvl_0_0 00:19:52.820 14:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:52.820 Found net devices under 0000:86:00.1: cvl_0_1 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:52.820 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:53.388 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:55.926 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
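[annotation] The adq_reload_driver block above (target/perf_adq.sh@58-63) resets the NIC driver so each pass starts without stale ADQ state. Condensed from the trace, with error handling left implicit:

    modprobe -a sch_mqprio    # qdisc module the ADQ mqprio config needs
    rmmod ice                 # unload the E810 driver, dropping old TC state
    modprobe ice              # reload it clean
    sleep 5                   # give the ports time to re-register their netdevs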
gather_supported_nvmf_pci_devs 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:01.204 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:01.204 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:01.204 Found net devices under 0000:86:00.0: cvl_0_0 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.204 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:01.220 Found net devices under 0000:86:00.1: cvl_0_1 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:01.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:20:01.220 00:20:01.220 --- 10.0.0.2 ping statistics --- 00:20:01.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.220 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
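[annotation] nvmf_tcp_init, summarized: the first E810 port (cvl_0_0) becomes the target and moves into its own network namespace, the second (cvl_0_1) stays in the default namespace as the initiator, and an iptables rule tagged SPDK_NVMF opens the NVMe/TCP port. The commands below are lifted straight from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2    # sub-millisecond RTT both ways confirms the wiring

The comment tag matters later: teardown removes only rules carrying SPDK_NVMF, so unrelated firewall state on the build host survives the test.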
00:20:01.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:20:01.220 00:20:01.220 --- 10.0.0.1 ping statistics --- 00:20:01.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.220 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1508529 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1508529 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1508529 ']' 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.220 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.220 [2024-11-17 14:29:49.900129] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
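[annotation] nvmfappstart, as traced: NVMF_APP gets the netns exec command prepended, the target is started with --wait-for-rpc so configuration RPCs can run before the framework initializes, and waitforlisten blocks until the RPC socket answers. A simplified stand-in (the real waitforlisten in autotest_common.sh also checks that the pid stays alive):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # poll until /var/tmp/spdk.sock accepts RPCs (sketch, not the harness loop)
    while ! "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5
    done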
00:20:01.220 [2024-11-17 14:29:49.900182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.220 [2024-11-17 14:29:49.978537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.220 [2024-11-17 14:29:50.029975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.220 [2024-11-17 14:29:50.030014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.220 [2024-11-17 14:29:50.030022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.220 [2024-11-17 14:29:50.030028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.220 [2024-11-17 14:29:50.030033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.220 [2024-11-17 14:29:50.031459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.220 [2024-11-17 14:29:50.031569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.220 [2024-11-17 14:29:50.031673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.220 [2024-11-17 14:29:50.031674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.220 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.221 
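[annotation] adq_configure_nvmf_target starts by asking the still-idle target which socket implementation is the default (posix here) and enabling placement-id plus zero-copy sends on it; mode 0 is the baseline pass without busy-poll steering. The same calls via scripts/rpc.py directly, in place of the harness's rpc_cmd wrapper:

    rpc=$rootdir/scripts/rpc.py          # rootdir = the spdk checkout above
    impl=$($rpc sock_get_default_impl | jq -r .impl_name)   # 'posix' in this run
    $rpc sock_impl_set_options --enable-placement-id 0 \
        --enable-zerocopy-send-server -i "$impl"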
14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.221 [2024-11-17 14:29:50.232139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.221 Malloc1 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.221 [2024-11-17 14:29:50.299707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1508564 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:01.221 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
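[annotation] With the socket options in place the framework is started and a minimal target is built: a TCP transport carrying the ADQ-relevant --sock-priority, one 64 MiB malloc ramdisk, and one subsystem listening on the namespaced address. The RPC sequence, as traced:

    rpc=$rootdir/scripts/rpc.py
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MiB, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420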
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:03.125 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:03.125 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.125 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.125 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.125 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:03.125 "tick_rate": 2300000000, 00:20:03.125 "poll_groups": [ 00:20:03.125 { 00:20:03.125 "name": "nvmf_tgt_poll_group_000", 00:20:03.125 "admin_qpairs": 1, 00:20:03.125 "io_qpairs": 1, 00:20:03.125 "current_admin_qpairs": 1, 00:20:03.125 "current_io_qpairs": 1, 00:20:03.125 "pending_bdev_io": 0, 00:20:03.125 "completed_nvme_io": 18718, 00:20:03.125 "transports": [ 00:20:03.125 { 00:20:03.125 "trtype": "TCP" 00:20:03.125 } 00:20:03.125 ] 00:20:03.125 }, 00:20:03.125 { 00:20:03.125 "name": "nvmf_tgt_poll_group_001", 00:20:03.125 "admin_qpairs": 0, 00:20:03.125 "io_qpairs": 1, 00:20:03.125 "current_admin_qpairs": 0, 00:20:03.125 "current_io_qpairs": 1, 00:20:03.125 "pending_bdev_io": 0, 00:20:03.125 "completed_nvme_io": 19046, 00:20:03.125 "transports": [ 00:20:03.125 { 00:20:03.125 "trtype": "TCP" 00:20:03.125 } 00:20:03.125 ] 00:20:03.125 }, 00:20:03.125 { 00:20:03.125 "name": "nvmf_tgt_poll_group_002", 00:20:03.125 "admin_qpairs": 0, 00:20:03.125 "io_qpairs": 1, 00:20:03.125 "current_admin_qpairs": 0, 00:20:03.125 "current_io_qpairs": 1, 00:20:03.125 "pending_bdev_io": 0, 00:20:03.125 "completed_nvme_io": 19174, 00:20:03.125 "transports": [ 00:20:03.125 { 00:20:03.125 "trtype": "TCP" 00:20:03.125 } 00:20:03.125 ] 00:20:03.125 }, 00:20:03.125 { 00:20:03.125 "name": "nvmf_tgt_poll_group_003", 00:20:03.125 "admin_qpairs": 0, 00:20:03.125 "io_qpairs": 1, 00:20:03.125 "current_admin_qpairs": 0, 00:20:03.125 "current_io_qpairs": 1, 00:20:03.125 "pending_bdev_io": 0, 00:20:03.125 "completed_nvme_io": 18659, 00:20:03.125 "transports": [ 00:20:03.125 { 00:20:03.125 "trtype": "TCP" 00:20:03.125 } 00:20:03.125 ] 00:20:03.125 } 00:20:03.125 ] 00:20:03.125 }' 00:20:03.125 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:03.125 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:03.383 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:03.384 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:03.384 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1508564 00:20:11.501 Initializing NVMe Controllers 00:20:11.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:11.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:11.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:11.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:20:11.502 Initialization complete. Launching workers. 00:20:11.502 ======================================================== 00:20:11.502 Latency(us) 00:20:11.502 Device Information : IOPS MiB/s Average min max 00:20:11.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10205.30 39.86 6272.00 2031.26 10760.35 00:20:11.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10201.70 39.85 6273.89 1785.25 10555.06 00:20:11.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9995.60 39.05 6404.25 2206.84 10776.94 00:20:11.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10040.70 39.22 6374.39 1732.07 11316.34 00:20:11.502 ======================================================== 00:20:11.502 Total : 40443.30 157.98 6330.58 1732.07 11316.34 00:20:11.502 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.502 rmmod nvme_tcp 00:20:11.502 rmmod nvme_fabrics 00:20:11.502 rmmod nvme_keyring 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1508529 ']' 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1508529 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1508529 ']' 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1508529 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508529 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508529' 00:20:11.502 killing process with pid 1508529 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1508529 00:20:11.502 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1508529 00:20:11.761 14:30:00 
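[annotation] Two things are checked while spdk_nvme_perf runs (-q 64 queue depth, -o 4096 byte random reads, -t 10 seconds, -c 0xF0 pinning the initiator to cores 4-7, which is why the attach lines above name lcores 4-7). First, nvmf_get_stats must show every poll group carrying exactly one I/O qpair, i.e. the four connections spread evenly; the jq filter counts matching groups and the run fails if the count is not 4. A sketch of that check:

    rpc=$rootdir/scripts/rpc.py
    busy=$($rpc nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
    [[ $busy -ne 4 ]] && echo "expected 4 busy poll groups, got $busy"

Second, the perf summary itself: roughly 10k IOPS per core, ~40.4k IOPS total at ~6.3 ms average latency for this baseline pass.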
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:11.761 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:11.761 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:11.761 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:11.761 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:11.761 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:11.761 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:11.761 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.761 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:11.761 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.761 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.761 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.669 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:13.669 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:13.669 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:13.669 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:15.047 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:16.955 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
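[annotation] nvmftestfini then unwinds the first pass: unload the initiator-side NVMe modules, kill the target, strip only the SPDK-tagged firewall rules, and drop the namespace before the driver is reloaded for the ADQ pass. Condensed, with the namespace removal being an assumption about what _remove_spdk_ns does (its body is not traced here):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                        # killprocess 1508529
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # the iptr helper
    ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent
    ip -4 addr flush cvl_0_1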
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.230 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:22.231 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:22.231 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:22.231 Found net devices under 0000:86:00.0: cvl_0_0 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:22.231 Found net devices under 0000:86:00.1: cvl_0_1 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:22.231 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:22.231 14:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:22.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:20:22.231 00:20:22.231 --- 10.0.0.2 ping statistics --- 00:20:22.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.231 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:22.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:20:22.231 00:20:22.231 --- 10.0.0.1 ping statistics --- 00:20:22.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.231 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:22.231 net.core.busy_poll = 1 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:22.231 net.core.busy_read = 1 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:22.231 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1512860 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1512860 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1512860 ']' 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.491 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.491 [2024-11-17 14:30:11.519259] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:20:22.491 [2024-11-17 14:30:11.519305] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.491 [2024-11-17 14:30:11.600183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.491 [2024-11-17 14:30:11.643785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
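[annotation] adq_configure_driver is the part that actually turns ADQ on, run against the target port inside the namespace: hardware TC offload, busy polling at the socket layer, an mqprio root qdisc splitting the queues into two traffic classes, and a hardware-only (skip_sw) flower filter steering NVMe/TCP traffic for 10.0.0.2:4420 into TC 1. Collected from the trace:

    ns="ip netns exec cvl_0_0_ns_spdk"
    $ns ethtool --offload cvl_0_0 hw-tc-offload on
    $ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    $ns /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $ns /usr/sbin/tc qdisc add dev cvl_0_0 ingress
    $ns /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    $ns "$rootdir/scripts/perf/nvmf/set_xps_rxqs" cvl_0_0  # align XPS with RX queues

queues 2@0 2@2 gives TC0 queues 0-1 and TC1 queues 2-3, so the filter lands all port-4420 flows on the second pair of hardware queues.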
00:20:22.491 [2024-11-17 14:30:11.643822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.491 [2024-11-17 14:30:11.643829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.491 [2024-11-17 14:30:11.643836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.491 [2024-11-17 14:30:11.643841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.491 [2024-11-17 14:30:11.645259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.491 [2024-11-17 14:30:11.645386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.491 [2024-11-17 14:30:11.645447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.491 [2024-11-17 14:30:11.645448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.430 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.431 14:30:12 
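[annotation] The second target is configured identically except for the two ADQ knobs: placement-id mode 1, which per SPDK's sock layer is intended to group accepted sockets by the NAPI/queue they arrive on, and a matching --sock-priority 1 on the transport created in the trace just below. Again as direct rpc.py calls:

    rpc=$rootdir/scripts/rpc.py
    $rpc sock_impl_set_options --enable-placement-id 1 \
        --enable-zerocopy-send-server -i posix
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1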
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.431 [2024-11-17 14:30:12.532985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.431 Malloc1 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.431 [2024-11-17 14:30:12.595186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1513109 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:23.431 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:25.969 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:25.969 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.969 14:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.969 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.969 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:25.969 "tick_rate": 2300000000, 00:20:25.969 "poll_groups": [ 00:20:25.969 { 00:20:25.969 "name": "nvmf_tgt_poll_group_000", 00:20:25.969 "admin_qpairs": 1, 00:20:25.969 "io_qpairs": 1, 00:20:25.969 "current_admin_qpairs": 1, 00:20:25.969 "current_io_qpairs": 1, 00:20:25.969 "pending_bdev_io": 0, 00:20:25.969 "completed_nvme_io": 28227, 00:20:25.969 "transports": [ 00:20:25.969 { 00:20:25.969 "trtype": "TCP" 00:20:25.969 } 00:20:25.969 ] 00:20:25.969 }, 00:20:25.969 { 00:20:25.969 "name": "nvmf_tgt_poll_group_001", 00:20:25.969 "admin_qpairs": 0, 00:20:25.969 "io_qpairs": 3, 00:20:25.969 "current_admin_qpairs": 0, 00:20:25.969 "current_io_qpairs": 3, 00:20:25.969 "pending_bdev_io": 0, 00:20:25.969 "completed_nvme_io": 29172, 00:20:25.969 "transports": [ 00:20:25.969 { 00:20:25.969 "trtype": "TCP" 00:20:25.969 } 00:20:25.969 ] 00:20:25.969 }, 00:20:25.969 { 00:20:25.969 "name": "nvmf_tgt_poll_group_002", 00:20:25.969 "admin_qpairs": 0, 00:20:25.969 "io_qpairs": 0, 00:20:25.969 "current_admin_qpairs": 0, 00:20:25.969 "current_io_qpairs": 0, 00:20:25.969 "pending_bdev_io": 0, 00:20:25.969 "completed_nvme_io": 0, 00:20:25.969 "transports": [ 00:20:25.969 { 00:20:25.969 "trtype": "TCP" 00:20:25.969 } 00:20:25.969 ] 00:20:25.969 }, 00:20:25.969 { 00:20:25.969 "name": "nvmf_tgt_poll_group_003", 00:20:25.969 "admin_qpairs": 0, 00:20:25.969 "io_qpairs": 0, 00:20:25.969 "current_admin_qpairs": 0, 00:20:25.969 "current_io_qpairs": 0, 00:20:25.969 "pending_bdev_io": 0, 00:20:25.969 "completed_nvme_io": 0, 00:20:25.969 "transports": [ 00:20:25.969 { 00:20:25.969 "trtype": "TCP" 00:20:25.969 } 00:20:25.969 ] 00:20:25.969 } 00:20:25.969 ] 00:20:25.969 }' 00:20:25.969 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:25.969 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:25.969 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:25.969 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:25.969 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1513109 00:20:34.091 Initializing NVMe Controllers 00:20:34.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:34.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:34.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:34.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:34.091 Initialization complete. Launching workers. 
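Before the perf results below, note what the jq check above is asserting: with ADQ steering plus the posix sock option --enable-placement-id, connections should collapse onto a subset of cores, so at least two of the four poll groups must report current_io_qpairs == 0 (here, groups 002 and 003 stayed idle). A sketch of the same check run by hand, assuming an SPDK checkout and the target's default RPC socket:

  ./scripts/rpc.py nvmf_get_stats |
      jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' |
      wc -l
  # jq prints one line per idle poll group; the test only fails when the
  # resulting count is below 2, i.e. when [[ $count -lt 2 ]] is true
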
00:20:34.091 ======================================================== 00:20:34.091 Latency(us) 00:20:34.091 Device Information : IOPS MiB/s Average min max 00:20:34.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4772.19 18.64 13449.93 1620.10 57876.73 00:20:34.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15003.47 58.61 4264.89 1678.39 6874.55 00:20:34.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4985.09 19.47 12881.90 1785.16 62342.05 00:20:34.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5111.19 19.97 12565.25 1720.14 60791.41 00:20:34.091 ======================================================== 00:20:34.091 Total : 29871.95 116.69 8590.49 1620.10 62342.05 00:20:34.091 00:20:34.091 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:34.091 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:34.091 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:34.091 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:34.091 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:34.091 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:34.091 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:34.091 rmmod nvme_tcp 00:20:34.091 rmmod nvme_fabrics 00:20:34.091 rmmod nvme_keyring 00:20:34.091 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1512860 ']' 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1512860 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1512860 ']' 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1512860 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1512860 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1512860' 00:20:34.092 killing process with pid 1512860 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1512860 00:20:34.092 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1512860 00:20:34.092 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:34.092 
14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:34.092 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:34.092 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:34.092 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:34.092 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:34.092 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:34.092 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.092 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:34.092 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.092 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.092 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:37.384 00:20:37.384 real 0m50.529s 00:20:37.384 user 2m46.878s 00:20:37.384 sys 0m10.169s 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.384 ************************************ 00:20:37.384 END TEST nvmf_perf_adq 00:20:37.384 ************************************ 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:37.384 ************************************ 00:20:37.384 START TEST nvmf_shutdown 00:20:37.384 ************************************ 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:37.384 * Looking for test storage... 
00:20:37.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:37.384 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:37.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.385 --rc genhtml_branch_coverage=1 00:20:37.385 --rc genhtml_function_coverage=1 00:20:37.385 --rc genhtml_legend=1 00:20:37.385 --rc geninfo_all_blocks=1 00:20:37.385 --rc geninfo_unexecuted_blocks=1 00:20:37.385 00:20:37.385 ' 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:37.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.385 --rc genhtml_branch_coverage=1 00:20:37.385 --rc genhtml_function_coverage=1 00:20:37.385 --rc genhtml_legend=1 00:20:37.385 --rc geninfo_all_blocks=1 00:20:37.385 --rc geninfo_unexecuted_blocks=1 00:20:37.385 00:20:37.385 ' 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:37.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.385 --rc genhtml_branch_coverage=1 00:20:37.385 --rc genhtml_function_coverage=1 00:20:37.385 --rc genhtml_legend=1 00:20:37.385 --rc geninfo_all_blocks=1 00:20:37.385 --rc geninfo_unexecuted_blocks=1 00:20:37.385 00:20:37.385 ' 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:37.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.385 --rc genhtml_branch_coverage=1 00:20:37.385 --rc genhtml_function_coverage=1 00:20:37.385 --rc genhtml_legend=1 00:20:37.385 --rc geninfo_all_blocks=1 00:20:37.385 --rc geninfo_unexecuted_blocks=1 00:20:37.385 00:20:37.385 ' 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
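The lt/cmp_versions trace above is the shutdown suite deciding whether the installed lcov (1.15 here) predates version 2, which controls the coverage flags exported next. The helper splits each version string on '.', '-' and ':' and compares the fields numerically, left to right. An illustrative re-sketch of that logic (the function body is a paraphrase, not the verbatim scripts/common.sh source):

  lt() { # returns 0 when version $1 < version $2
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
          ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
      done
      return 1 # versions are equal
  }
  lt 1.15 2 && echo "old lcov: keep the --rc lcov_*_coverage=1 options"
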
00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:37.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:37.385 14:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.385 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:37.385 ************************************ 00:20:37.385 START TEST nvmf_shutdown_tc1 00:20:37.386 ************************************ 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:37.386 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.964 14:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.964 14:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:43.964 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:43.964 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.964 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:43.965 Found net devices under 0000:86:00.0: cvl_0_0 00:20:43.965 14:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:43.965 Found net devices under 0000:86:00.1: cvl_0_1 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:43.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:20:43.965 00:20:43.965 --- 10.0.0.2 ping statistics --- 00:20:43.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.965 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:43.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:20:43.965 00:20:43.965 --- 10.0.0.1 ping statistics --- 00:20:43.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.965 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1518556 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1518556 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1518556 ']' 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
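The namespace setup traced above is what lets one host act as both target and initiator over real E810 ports: cvl_0_0 (10.0.0.2) is moved into cvl_0_0_ns_spdk for nvmf_tgt, while its peer cvl_0_1 (10.0.0.1) stays in the default namespace for the initiator, and the two pings confirm the cross-namespace link. Condensed from the commands in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                   # default ns -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back
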
00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.965 [2024-11-17 14:30:32.526546] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:20:43.965 [2024-11-17 14:30:32.526595] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.965 [2024-11-17 14:30:32.606647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.965 [2024-11-17 14:30:32.648783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.965 [2024-11-17 14:30:32.648823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.965 [2024-11-17 14:30:32.648831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.965 [2024-11-17 14:30:32.648838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.965 [2024-11-17 14:30:32.648844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.965 [2024-11-17 14:30:32.650287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.965 [2024-11-17 14:30:32.650415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.965 [2024-11-17 14:30:32.650522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.965 [2024-11-17 14:30:32.650523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.965 [2024-11-17 14:30:32.794328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.965 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:43.966 14:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.966 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.966 Malloc1 
00:20:43.966 [2024-11-17 14:30:32.905933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.966 Malloc2 00:20:43.966 Malloc3 00:20:43.966 Malloc4 00:20:43.966 Malloc5 00:20:43.966 Malloc6 00:20:43.966 Malloc7 00:20:44.261 Malloc8 00:20:44.261 Malloc9 00:20:44.261 Malloc10 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1518635 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1518635 /var/tmp/bdevperf.sock 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1518635 ']' 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:44.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
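Malloc1 through Malloc10 above come from the create_subsystems loop: for each i in 1..10 the test cats a per-subsystem fragment into rpcs.txt, then replays the whole file through a single rpc_cmd invocation so all ten subsystems are created in one RPC session. A sketch of one fragment, using the 64 MiB / 512 B malloc geometry and the 10.0.0.2:4420 listener from this run (the serial-number string is an assumption):

  # block for i = 1, appended to rpcs.txt
  bdev_malloc_create 64 512 -b Malloc1
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # ...repeated for cnode2..cnode10, then replayed as one batch:
  rpc_cmd < rpcs.txt
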
00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.261 { 00:20:44.261 "params": { 00:20:44.261 "name": "Nvme$subsystem", 00:20:44.261 "trtype": "$TEST_TRANSPORT", 00:20:44.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.261 "adrfam": "ipv4", 00:20:44.261 "trsvcid": "$NVMF_PORT", 00:20:44.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.261 "hdgst": ${hdgst:-false}, 00:20:44.261 "ddgst": ${ddgst:-false} 00:20:44.261 }, 00:20:44.261 "method": "bdev_nvme_attach_controller" 00:20:44.261 } 00:20:44.261 EOF 00:20:44.261 )") 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.261 { 00:20:44.261 "params": { 00:20:44.261 "name": "Nvme$subsystem", 00:20:44.261 "trtype": "$TEST_TRANSPORT", 00:20:44.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.261 "adrfam": "ipv4", 00:20:44.261 "trsvcid": "$NVMF_PORT", 00:20:44.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.261 "hdgst": ${hdgst:-false}, 00:20:44.261 "ddgst": ${ddgst:-false} 00:20:44.261 }, 00:20:44.261 "method": "bdev_nvme_attach_controller" 00:20:44.261 } 00:20:44.261 EOF 00:20:44.261 )") 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.261 { 00:20:44.261 "params": { 00:20:44.261 "name": "Nvme$subsystem", 00:20:44.261 "trtype": "$TEST_TRANSPORT", 00:20:44.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.261 "adrfam": "ipv4", 00:20:44.261 "trsvcid": "$NVMF_PORT", 00:20:44.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.261 "hdgst": ${hdgst:-false}, 00:20:44.261 "ddgst": ${ddgst:-false} 00:20:44.261 }, 00:20:44.261 "method": "bdev_nvme_attach_controller" 00:20:44.261 } 00:20:44.261 EOF 00:20:44.261 )") 00:20:44.261 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.262 { 00:20:44.262 "params": { 00:20:44.262 "name": "Nvme$subsystem", 00:20:44.262 
"trtype": "$TEST_TRANSPORT", 00:20:44.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.262 "adrfam": "ipv4", 00:20:44.262 "trsvcid": "$NVMF_PORT", 00:20:44.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.262 "hdgst": ${hdgst:-false}, 00:20:44.262 "ddgst": ${ddgst:-false} 00:20:44.262 }, 00:20:44.262 "method": "bdev_nvme_attach_controller" 00:20:44.262 } 00:20:44.262 EOF 00:20:44.262 )") 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.262 { 00:20:44.262 "params": { 00:20:44.262 "name": "Nvme$subsystem", 00:20:44.262 "trtype": "$TEST_TRANSPORT", 00:20:44.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.262 "adrfam": "ipv4", 00:20:44.262 "trsvcid": "$NVMF_PORT", 00:20:44.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.262 "hdgst": ${hdgst:-false}, 00:20:44.262 "ddgst": ${ddgst:-false} 00:20:44.262 }, 00:20:44.262 "method": "bdev_nvme_attach_controller" 00:20:44.262 } 00:20:44.262 EOF 00:20:44.262 )") 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.262 { 00:20:44.262 "params": { 00:20:44.262 "name": "Nvme$subsystem", 00:20:44.262 "trtype": "$TEST_TRANSPORT", 00:20:44.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.262 "adrfam": "ipv4", 00:20:44.262 "trsvcid": "$NVMF_PORT", 00:20:44.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.262 "hdgst": ${hdgst:-false}, 00:20:44.262 "ddgst": ${ddgst:-false} 00:20:44.262 }, 00:20:44.262 "method": "bdev_nvme_attach_controller" 00:20:44.262 } 00:20:44.262 EOF 00:20:44.262 )") 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.262 { 00:20:44.262 "params": { 00:20:44.262 "name": "Nvme$subsystem", 00:20:44.262 "trtype": "$TEST_TRANSPORT", 00:20:44.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.262 "adrfam": "ipv4", 00:20:44.262 "trsvcid": "$NVMF_PORT", 00:20:44.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.262 "hdgst": ${hdgst:-false}, 00:20:44.262 "ddgst": ${ddgst:-false} 00:20:44.262 }, 00:20:44.262 "method": "bdev_nvme_attach_controller" 00:20:44.262 } 00:20:44.262 EOF 00:20:44.262 )") 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.262 [2024-11-17 14:30:33.380473] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:20:44.262 [2024-11-17 14:30:33.380525] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.262 { 00:20:44.262 "params": { 00:20:44.262 "name": "Nvme$subsystem", 00:20:44.262 "trtype": "$TEST_TRANSPORT", 00:20:44.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.262 "adrfam": "ipv4", 00:20:44.262 "trsvcid": "$NVMF_PORT", 00:20:44.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.262 "hdgst": ${hdgst:-false}, 00:20:44.262 "ddgst": ${ddgst:-false} 00:20:44.262 }, 00:20:44.262 "method": "bdev_nvme_attach_controller" 00:20:44.262 } 00:20:44.262 EOF 00:20:44.262 )") 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.262 { 00:20:44.262 "params": { 00:20:44.262 "name": "Nvme$subsystem", 00:20:44.262 "trtype": "$TEST_TRANSPORT", 00:20:44.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.262 "adrfam": "ipv4", 00:20:44.262 "trsvcid": "$NVMF_PORT", 00:20:44.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.262 "hdgst": ${hdgst:-false}, 00:20:44.262 "ddgst": ${ddgst:-false} 00:20:44.262 }, 00:20:44.262 "method": "bdev_nvme_attach_controller" 00:20:44.262 } 00:20:44.262 EOF 00:20:44.262 )") 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.262 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.262 { 00:20:44.262 "params": { 00:20:44.262 "name": "Nvme$subsystem", 00:20:44.262 "trtype": "$TEST_TRANSPORT", 00:20:44.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.262 "adrfam": "ipv4", 00:20:44.262 "trsvcid": "$NVMF_PORT", 00:20:44.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.263 "hdgst": ${hdgst:-false}, 00:20:44.263 "ddgst": ${ddgst:-false} 00:20:44.263 }, 00:20:44.263 "method": "bdev_nvme_attach_controller" 00:20:44.263 } 00:20:44.263 EOF 00:20:44.263 )") 00:20:44.263 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.263 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
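The repeated here-doc blocks traced above are gen_nvmf_target_json accumulating one bdev_nvme_attach_controller entry per subsystem into a bash array; the fragments are then comma-joined via IFS and piped through jq, which is why the fully expanded document appears next in the log. A condensed sketch of that pattern (the real helper in nvmf/common.sh wraps the fragments in a similar top-level "subsystems" document):

# Condensed sketch of the config-generation loop seen in the xtrace above.
config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Comma-join the fragments inside a bdev subsystem document and pretty-print it.
(IFS=,; printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}") | jq .
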
00:20:44.263 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:44.263 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:44.263 "params": { 00:20:44.263 "name": "Nvme1", 00:20:44.263 "trtype": "tcp", 00:20:44.263 "traddr": "10.0.0.2", 00:20:44.263 "adrfam": "ipv4", 00:20:44.263 "trsvcid": "4420", 00:20:44.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.263 "hdgst": false, 00:20:44.263 "ddgst": false 00:20:44.263 }, 00:20:44.263 "method": "bdev_nvme_attach_controller" 00:20:44.263 },{ 00:20:44.263 "params": { 00:20:44.263 "name": "Nvme2", 00:20:44.263 "trtype": "tcp", 00:20:44.263 "traddr": "10.0.0.2", 00:20:44.263 "adrfam": "ipv4", 00:20:44.263 "trsvcid": "4420", 00:20:44.263 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:44.263 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:44.263 "hdgst": false, 00:20:44.263 "ddgst": false 00:20:44.263 }, 00:20:44.263 "method": "bdev_nvme_attach_controller" 00:20:44.263 },{ 00:20:44.263 "params": { 00:20:44.263 "name": "Nvme3", 00:20:44.263 "trtype": "tcp", 00:20:44.263 "traddr": "10.0.0.2", 00:20:44.263 "adrfam": "ipv4", 00:20:44.263 "trsvcid": "4420", 00:20:44.263 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:44.263 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:44.263 "hdgst": false, 00:20:44.263 "ddgst": false 00:20:44.263 }, 00:20:44.263 "method": "bdev_nvme_attach_controller" 00:20:44.263 },{ 00:20:44.263 "params": { 00:20:44.263 "name": "Nvme4", 00:20:44.263 "trtype": "tcp", 00:20:44.263 "traddr": "10.0.0.2", 00:20:44.263 "adrfam": "ipv4", 00:20:44.263 "trsvcid": "4420", 00:20:44.263 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:44.263 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:44.263 "hdgst": false, 00:20:44.263 "ddgst": false 00:20:44.263 }, 00:20:44.263 "method": "bdev_nvme_attach_controller" 00:20:44.263 },{ 00:20:44.263 "params": { 00:20:44.263 "name": "Nvme5", 00:20:44.263 "trtype": "tcp", 00:20:44.263 "traddr": "10.0.0.2", 00:20:44.263 "adrfam": "ipv4", 00:20:44.263 "trsvcid": "4420", 00:20:44.263 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:44.263 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:44.263 "hdgst": false, 00:20:44.263 "ddgst": false 00:20:44.263 }, 00:20:44.263 "method": "bdev_nvme_attach_controller" 00:20:44.263 },{ 00:20:44.263 "params": { 00:20:44.263 "name": "Nvme6", 00:20:44.263 "trtype": "tcp", 00:20:44.263 "traddr": "10.0.0.2", 00:20:44.263 "adrfam": "ipv4", 00:20:44.263 "trsvcid": "4420", 00:20:44.263 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:44.263 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:44.263 "hdgst": false, 00:20:44.263 "ddgst": false 00:20:44.263 }, 00:20:44.263 "method": "bdev_nvme_attach_controller" 00:20:44.263 },{ 00:20:44.263 "params": { 00:20:44.263 "name": "Nvme7", 00:20:44.263 "trtype": "tcp", 00:20:44.263 "traddr": "10.0.0.2", 00:20:44.263 "adrfam": "ipv4", 00:20:44.263 "trsvcid": "4420", 00:20:44.263 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:44.263 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:44.263 "hdgst": false, 00:20:44.263 "ddgst": false 00:20:44.263 }, 00:20:44.263 "method": "bdev_nvme_attach_controller" 00:20:44.263 },{ 00:20:44.263 "params": { 00:20:44.263 "name": "Nvme8", 00:20:44.263 "trtype": "tcp", 00:20:44.263 "traddr": "10.0.0.2", 00:20:44.263 "adrfam": "ipv4", 00:20:44.263 "trsvcid": "4420", 00:20:44.263 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:44.263 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:44.263 "hdgst": false, 00:20:44.263 "ddgst": false 00:20:44.263 }, 00:20:44.263 "method": "bdev_nvme_attach_controller" 00:20:44.263 },{ 00:20:44.263 "params": { 00:20:44.263 "name": "Nvme9", 00:20:44.263 "trtype": "tcp", 00:20:44.263 "traddr": "10.0.0.2", 00:20:44.263 "adrfam": "ipv4", 00:20:44.263 "trsvcid": "4420", 00:20:44.263 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:44.263 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:44.263 "hdgst": false, 00:20:44.263 "ddgst": false 00:20:44.263 }, 00:20:44.263 "method": "bdev_nvme_attach_controller" 00:20:44.263 },{ 00:20:44.263 "params": { 00:20:44.263 "name": "Nvme10", 00:20:44.263 "trtype": "tcp", 00:20:44.263 "traddr": "10.0.0.2", 00:20:44.263 "adrfam": "ipv4", 00:20:44.263 "trsvcid": "4420", 00:20:44.263 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:44.263 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:44.263 "hdgst": false, 00:20:44.263 "ddgst": false 00:20:44.263 }, 00:20:44.263 "method": "bdev_nvme_attach_controller" 00:20:44.263 }' 00:20:44.263 [2024-11-17 14:30:33.456398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.587 [2024-11-17 14:30:33.498941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.489 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.489 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:46.489 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:46.489 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.489 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.489 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.489 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1518635 00:20:46.489 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:46.489 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:47.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1518635 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:47.055 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1518556 00:20:47.055 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:47.055 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:47.055 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:47.055 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:47.055 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:47.055 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.055 { 00:20:47.055 "params": { 00:20:47.055 "name": "Nvme$subsystem", 00:20:47.055 "trtype": "$TEST_TRANSPORT", 00:20:47.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.055 "adrfam": "ipv4", 00:20:47.055 "trsvcid": "$NVMF_PORT", 00:20:47.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.055 "hdgst": ${hdgst:-false}, 00:20:47.055 "ddgst": ${ddgst:-false} 00:20:47.055 }, 00:20:47.055 "method": "bdev_nvme_attach_controller" 00:20:47.055 } 00:20:47.055 EOF 00:20:47.055 )") 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.315 { 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme$subsystem", 00:20:47.315 "trtype": "$TEST_TRANSPORT", 00:20:47.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "$NVMF_PORT", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.315 "hdgst": ${hdgst:-false}, 00:20:47.315 "ddgst": ${ddgst:-false} 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 } 00:20:47.315 EOF 00:20:47.315 )") 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.315 { 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme$subsystem", 00:20:47.315 "trtype": "$TEST_TRANSPORT", 00:20:47.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "$NVMF_PORT", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.315 "hdgst": ${hdgst:-false}, 00:20:47.315 "ddgst": ${ddgst:-false} 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 } 00:20:47.315 EOF 00:20:47.315 )") 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.315 { 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme$subsystem", 00:20:47.315 "trtype": "$TEST_TRANSPORT", 00:20:47.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "$NVMF_PORT", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.315 "hdgst": ${hdgst:-false}, 00:20:47.315 "ddgst": ${ddgst:-false} 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 } 00:20:47.315 EOF 00:20:47.315 )") 00:20:47.315 14:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.315 { 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme$subsystem", 00:20:47.315 "trtype": "$TEST_TRANSPORT", 00:20:47.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "$NVMF_PORT", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.315 "hdgst": ${hdgst:-false}, 00:20:47.315 "ddgst": ${ddgst:-false} 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 } 00:20:47.315 EOF 00:20:47.315 )") 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.315 { 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme$subsystem", 00:20:47.315 "trtype": "$TEST_TRANSPORT", 00:20:47.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "$NVMF_PORT", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.315 "hdgst": ${hdgst:-false}, 00:20:47.315 "ddgst": ${ddgst:-false} 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 } 00:20:47.315 EOF 00:20:47.315 )") 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.315 { 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme$subsystem", 00:20:47.315 "trtype": "$TEST_TRANSPORT", 00:20:47.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "$NVMF_PORT", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.315 "hdgst": ${hdgst:-false}, 00:20:47.315 "ddgst": ${ddgst:-false} 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 } 00:20:47.315 EOF 00:20:47.315 )") 00:20:47.315 [2024-11-17 14:30:36.319065] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
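After each app launch the harness blocks on the app's RPC socket before driving I/O: waitforlisten polls until the UNIX domain socket exists, then rpc_cmd issues framework_wait_init (visible earlier in the trace against /var/tmp/bdevperf.sock) so controller attach has finished. A minimal sketch of that handshake, assuming scripts/rpc.py as the RPC client:

# Sketch of the waitforlisten + framework_wait_init handshake from the trace.
rpc_sock=/var/tmp/bdevperf.sock
for _ in {1..100}; do
    [[ -S $rpc_sock ]] && break    # wait until the app creates its RPC socket
    sleep 0.1
done
./scripts/rpc.py -s "$rpc_sock" framework_wait_init   # returns once subsystem init completes
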
00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.315 [2024-11-17 14:30:36.319115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519186 ] 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.315 { 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme$subsystem", 00:20:47.315 "trtype": "$TEST_TRANSPORT", 00:20:47.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "$NVMF_PORT", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.315 "hdgst": ${hdgst:-false}, 00:20:47.315 "ddgst": ${ddgst:-false} 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 } 00:20:47.315 EOF 00:20:47.315 )") 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.315 { 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme$subsystem", 00:20:47.315 "trtype": "$TEST_TRANSPORT", 00:20:47.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "$NVMF_PORT", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.315 "hdgst": ${hdgst:-false}, 00:20:47.315 "ddgst": ${ddgst:-false} 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 } 00:20:47.315 EOF 00:20:47.315 )") 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.315 { 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme$subsystem", 00:20:47.315 "trtype": "$TEST_TRANSPORT", 00:20:47.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "$NVMF_PORT", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.315 "hdgst": ${hdgst:-false}, 00:20:47.315 "ddgst": ${ddgst:-false} 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 } 00:20:47.315 EOF 00:20:47.315 )") 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
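This second pass feeds the same generated JSON to bdevperf, which the trace shows being started with --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1. A sketch of that invocation with the flags spelled out (flag meanings as documented by bdevperf's usage text):

# Sketch of the bdevperf launch recorded in the trace.
args=(
    -q 64        # queue depth: 64 outstanding I/Os per bdev
    -o 65536     # I/O size: 64 KiB
    -w verify    # workload: write, read back, and verify
    -t 1         # run time: 1 second
)
./build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) "${args[@]}"
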
00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:47.315 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme1", 00:20:47.315 "trtype": "tcp", 00:20:47.315 "traddr": "10.0.0.2", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "4420", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.315 "hdgst": false, 00:20:47.315 "ddgst": false 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 },{ 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme2", 00:20:47.315 "trtype": "tcp", 00:20:47.315 "traddr": "10.0.0.2", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "4420", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:47.315 "hdgst": false, 00:20:47.315 "ddgst": false 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 },{ 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme3", 00:20:47.315 "trtype": "tcp", 00:20:47.315 "traddr": "10.0.0.2", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "4420", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:47.315 "hdgst": false, 00:20:47.315 "ddgst": false 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 },{ 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme4", 00:20:47.315 "trtype": "tcp", 00:20:47.315 "traddr": "10.0.0.2", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "4420", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:47.315 "hdgst": false, 00:20:47.315 "ddgst": false 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 },{ 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme5", 00:20:47.315 "trtype": "tcp", 00:20:47.315 "traddr": "10.0.0.2", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "4420", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:47.315 "hdgst": false, 00:20:47.315 "ddgst": false 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 },{ 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme6", 00:20:47.315 "trtype": "tcp", 00:20:47.315 "traddr": "10.0.0.2", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "4420", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:47.315 "hdgst": false, 00:20:47.315 "ddgst": false 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 },{ 00:20:47.315 "params": { 00:20:47.315 "name": "Nvme7", 00:20:47.315 "trtype": "tcp", 00:20:47.315 "traddr": "10.0.0.2", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "4420", 00:20:47.315 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:47.315 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:47.315 "hdgst": false, 00:20:47.315 "ddgst": false 00:20:47.315 }, 00:20:47.315 "method": "bdev_nvme_attach_controller" 00:20:47.315 },{ 00:20:47.316 "params": { 00:20:47.316 "name": "Nvme8", 00:20:47.316 "trtype": "tcp", 00:20:47.316 "traddr": "10.0.0.2", 00:20:47.316 "adrfam": "ipv4", 00:20:47.316 "trsvcid": "4420", 00:20:47.316 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:47.316 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:47.316 "hdgst": false, 00:20:47.316 "ddgst": false 00:20:47.316 }, 00:20:47.316 "method": "bdev_nvme_attach_controller" 00:20:47.316 },{ 00:20:47.316 "params": { 00:20:47.316 "name": "Nvme9", 00:20:47.316 "trtype": "tcp", 00:20:47.316 "traddr": "10.0.0.2", 00:20:47.316 "adrfam": "ipv4", 00:20:47.316 "trsvcid": "4420", 00:20:47.316 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:47.316 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:47.316 "hdgst": false, 00:20:47.316 "ddgst": false 00:20:47.316 }, 00:20:47.316 "method": "bdev_nvme_attach_controller" 00:20:47.316 },{ 00:20:47.316 "params": { 00:20:47.316 "name": "Nvme10", 00:20:47.316 "trtype": "tcp", 00:20:47.316 "traddr": "10.0.0.2", 00:20:47.316 "adrfam": "ipv4", 00:20:47.316 "trsvcid": "4420", 00:20:47.316 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:47.316 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:47.316 "hdgst": false, 00:20:47.316 "ddgst": false 00:20:47.316 }, 00:20:47.316 "method": "bdev_nvme_attach_controller" 00:20:47.316 }' 00:20:47.316 [2024-11-17 14:30:36.398520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.316 [2024-11-17 14:30:36.440517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.217 Running I/O for 1 seconds... 00:20:50.043 2195.00 IOPS, 137.19 MiB/s 00:20:50.043 Latency(us) 00:20:50.043 [2024-11-17T13:30:39.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.043 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.043 Verification LBA range: start 0x0 length 0x400 00:20:50.043 Nvme1n1 : 1.14 286.11 17.88 0.00 0.00 221082.63 3291.05 218833.25 00:20:50.043 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.043 Verification LBA range: start 0x0 length 0x400 00:20:50.043 Nvme2n1 : 1.14 279.73 17.48 0.00 0.00 221978.98 15500.69 218833.25 00:20:50.043 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.043 Verification LBA range: start 0x0 length 0x400 00:20:50.043 Nvme3n1 : 1.14 280.68 17.54 0.00 0.00 219290.94 14588.88 225215.89 00:20:50.043 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.043 Verification LBA range: start 0x0 length 0x400 00:20:50.043 Nvme4n1 : 1.12 293.04 18.32 0.00 0.00 202813.26 9915.88 205156.17 00:20:50.043 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.043 Verification LBA range: start 0x0 length 0x400 00:20:50.043 Nvme5n1 : 1.10 233.11 14.57 0.00 0.00 255798.32 18578.03 230686.72 00:20:50.043 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.043 Verification LBA range: start 0x0 length 0x400 00:20:50.043 Nvme6n1 : 1.09 234.29 14.64 0.00 0.00 250392.04 17210.32 225215.89 00:20:50.043 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.043 Verification LBA range: start 0x0 length 0x400 00:20:50.043 Nvme7n1 : 1.12 284.52 17.78 0.00 0.00 203588.83 18122.13 222480.47 00:20:50.043 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.043 Verification LBA range: start 0x0 length 0x400 00:20:50.043 Nvme8n1 : 1.15 279.24 17.45 0.00 0.00 204523.25 15728.64 227951.30 00:20:50.043 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.043 Verification LBA range: start 0x0 length 0x400 00:20:50.043 Nvme9n1 : 1.15 278.35 17.40 0.00 0.00 202160.22 15614.66 230686.72 00:20:50.043 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:20:50.043 Verification LBA range: start 0x0 length 0x400 00:20:50.043 Nvme10n1 : 1.15 277.42 17.34 0.00 0.00 199774.03 14588.88 242540.19 00:20:50.043 [2024-11-17T13:30:39.268Z] =================================================================================================================== 00:20:50.043 [2024-11-17T13:30:39.268Z] Total : 2726.49 170.41 0.00 0.00 216650.47 3291.05 242540.19 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.302 rmmod nvme_tcp 00:20:50.302 rmmod nvme_fabrics 00:20:50.302 rmmod nvme_keyring 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1518556 ']' 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1518556 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1518556 ']' 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1518556 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1518556 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.302 14:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1518556' 00:20:50.302 killing process with pid 1518556 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1518556 00:20:50.302 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1518556 00:20:50.561 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.561 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.561 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.561 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:50.561 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:50.819 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.819 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.819 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.819 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:50.820 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.820 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.820 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:52.727 00:20:52.727 real 0m15.380s 00:20:52.727 user 0m34.522s 00:20:52.727 sys 0m5.820s 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:52.727 ************************************ 00:20:52.727 END TEST nvmf_shutdown_tc1 00:20:52.727 ************************************ 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:52.727 ************************************ 00:20:52.727 START TEST nvmf_shutdown_tc2 00:20:52.727 ************************************ 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:52.727 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.987 14:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:52.987 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.987 14:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:52.987 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:52.987 Found net devices under 0000:86:00.0: cvl_0_0 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.987 14:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:52.987 Found net devices under 0000:86:00.1: cvl_0_1 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.987 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:52.988 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:52.988 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.988 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:52.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:20:52.988 00:20:52.988 --- 10.0.0.2 ping statistics --- 00:20:52.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.988 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:20:52.988 00:20:52.988 --- 10.0.0.1 ping statistics --- 00:20:52.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.988 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:52.988 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1520354 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1520354 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1520354 ']' 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.247 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.247 [2024-11-17 14:30:42.299763] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:20:53.247 [2024-11-17 14:30:42.299810] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.247 [2024-11-17 14:30:42.378219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.247 [2024-11-17 14:30:42.420208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.247 [2024-11-17 14:30:42.420245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.247 [2024-11-17 14:30:42.420253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.247 [2024-11-17 14:30:42.420259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.247 [2024-11-17 14:30:42.420264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
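The nvmf_tgt for this test case is started with -m 0x1E, i.e. binary 11110: bit n set means a reactor runs on core n, so the mask selects cores 1-4, matching both the "Total cores available: 4" notice above and the four reactor threads reported next. A quick way to decode such a mask:

# Decode an SPDK core mask: bit n set => reactor on core n (0x1E -> cores 1-4).
mask=0x1E
for core in {0..7}; do
    (( mask >> core & 1 )) && echo "core $core"
done
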
00:20:53.247 [2024-11-17 14:30:42.424371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.247 [2024-11-17 14:30:42.424459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.247 [2024-11-17 14:30:42.424565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.247 [2024-11-17 14:30:42.424566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.506 [2024-11-17 14:30:42.560336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:53.506 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.507 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.507 Malloc1 00:20:53.507 [2024-11-17 14:30:42.672091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.507 Malloc2 00:20:53.765 Malloc3 00:20:53.765 Malloc4 00:20:53.765 Malloc5 00:20:53.765 Malloc6 00:20:53.765 Malloc7 00:20:53.765 Malloc8 00:20:54.025 Malloc9 00:20:54.025 Malloc10 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1520409 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1520409 /var/tmp/bdevperf.sock 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1520409 ']' 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.025 14:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.025 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.025 { 00:20:54.025 "params": { 00:20:54.025 "name": "Nvme$subsystem", 00:20:54.025 "trtype": "$TEST_TRANSPORT", 00:20:54.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.025 "adrfam": "ipv4", 00:20:54.025 "trsvcid": "$NVMF_PORT", 00:20:54.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.025 "hdgst": ${hdgst:-false}, 00:20:54.025 "ddgst": ${ddgst:-false} 00:20:54.025 }, 00:20:54.025 "method": "bdev_nvme_attach_controller" 00:20:54.025 } 00:20:54.026 EOF 00:20:54.026 )") 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.026 { 00:20:54.026 "params": { 00:20:54.026 "name": "Nvme$subsystem", 00:20:54.026 "trtype": "$TEST_TRANSPORT", 00:20:54.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.026 "adrfam": "ipv4", 00:20:54.026 "trsvcid": "$NVMF_PORT", 00:20:54.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.026 "hdgst": ${hdgst:-false}, 00:20:54.026 "ddgst": ${ddgst:-false} 00:20:54.026 }, 00:20:54.026 "method": "bdev_nvme_attach_controller" 00:20:54.026 } 00:20:54.026 EOF 00:20:54.026 )") 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.026 { 00:20:54.026 "params": { 00:20:54.026 
"name": "Nvme$subsystem", 00:20:54.026 "trtype": "$TEST_TRANSPORT", 00:20:54.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.026 "adrfam": "ipv4", 00:20:54.026 "trsvcid": "$NVMF_PORT", 00:20:54.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.026 "hdgst": ${hdgst:-false}, 00:20:54.026 "ddgst": ${ddgst:-false} 00:20:54.026 }, 00:20:54.026 "method": "bdev_nvme_attach_controller" 00:20:54.026 } 00:20:54.026 EOF 00:20:54.026 )") 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.026 { 00:20:54.026 "params": { 00:20:54.026 "name": "Nvme$subsystem", 00:20:54.026 "trtype": "$TEST_TRANSPORT", 00:20:54.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.026 "adrfam": "ipv4", 00:20:54.026 "trsvcid": "$NVMF_PORT", 00:20:54.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.026 "hdgst": ${hdgst:-false}, 00:20:54.026 "ddgst": ${ddgst:-false} 00:20:54.026 }, 00:20:54.026 "method": "bdev_nvme_attach_controller" 00:20:54.026 } 00:20:54.026 EOF 00:20:54.026 )") 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.026 { 00:20:54.026 "params": { 00:20:54.026 "name": "Nvme$subsystem", 00:20:54.026 "trtype": "$TEST_TRANSPORT", 00:20:54.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.026 "adrfam": "ipv4", 00:20:54.026 "trsvcid": "$NVMF_PORT", 00:20:54.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.026 "hdgst": ${hdgst:-false}, 00:20:54.026 "ddgst": ${ddgst:-false} 00:20:54.026 }, 00:20:54.026 "method": "bdev_nvme_attach_controller" 00:20:54.026 } 00:20:54.026 EOF 00:20:54.026 )") 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.026 { 00:20:54.026 "params": { 00:20:54.026 "name": "Nvme$subsystem", 00:20:54.026 "trtype": "$TEST_TRANSPORT", 00:20:54.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.026 "adrfam": "ipv4", 00:20:54.026 "trsvcid": "$NVMF_PORT", 00:20:54.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.026 "hdgst": ${hdgst:-false}, 00:20:54.026 "ddgst": ${ddgst:-false} 00:20:54.026 }, 00:20:54.026 "method": "bdev_nvme_attach_controller" 00:20:54.026 } 00:20:54.026 EOF 00:20:54.026 )") 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.026 { 00:20:54.026 "params": { 00:20:54.026 "name": "Nvme$subsystem", 00:20:54.026 "trtype": "$TEST_TRANSPORT", 00:20:54.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.026 "adrfam": "ipv4", 00:20:54.026 "trsvcid": "$NVMF_PORT", 00:20:54.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.026 "hdgst": ${hdgst:-false}, 00:20:54.026 "ddgst": ${ddgst:-false} 00:20:54.026 }, 00:20:54.026 "method": "bdev_nvme_attach_controller" 00:20:54.026 } 00:20:54.026 EOF 00:20:54.026 )") 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:54.026 [2024-11-17 14:30:43.144169] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:20:54.026 [2024-11-17 14:30:43.144217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520409 ] 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.026 { 00:20:54.026 "params": { 00:20:54.026 "name": "Nvme$subsystem", 00:20:54.026 "trtype": "$TEST_TRANSPORT", 00:20:54.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.026 "adrfam": "ipv4", 00:20:54.026 "trsvcid": "$NVMF_PORT", 00:20:54.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.026 "hdgst": ${hdgst:-false}, 00:20:54.026 "ddgst": ${ddgst:-false} 00:20:54.026 }, 00:20:54.026 "method": "bdev_nvme_attach_controller" 00:20:54.026 } 00:20:54.026 EOF 00:20:54.026 )") 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.026 { 00:20:54.026 "params": { 00:20:54.026 "name": "Nvme$subsystem", 00:20:54.026 "trtype": "$TEST_TRANSPORT", 00:20:54.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.026 "adrfam": "ipv4", 00:20:54.026 "trsvcid": "$NVMF_PORT", 00:20:54.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.026 "hdgst": ${hdgst:-false}, 00:20:54.026 "ddgst": ${ddgst:-false} 00:20:54.026 }, 00:20:54.026 "method": "bdev_nvme_attach_controller" 00:20:54.026 } 00:20:54.026 EOF 00:20:54.026 )") 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.026 { 00:20:54.026 "params": { 00:20:54.026 "name": "Nvme$subsystem", 00:20:54.026 "trtype": "$TEST_TRANSPORT", 00:20:54.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.026 
"adrfam": "ipv4", 00:20:54.026 "trsvcid": "$NVMF_PORT", 00:20:54.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.026 "hdgst": ${hdgst:-false}, 00:20:54.026 "ddgst": ${ddgst:-false} 00:20:54.026 }, 00:20:54.026 "method": "bdev_nvme_attach_controller" 00:20:54.026 } 00:20:54.026 EOF 00:20:54.026 )") 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:54.026 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:54.026 "params": { 00:20:54.026 "name": "Nvme1", 00:20:54.026 "trtype": "tcp", 00:20:54.026 "traddr": "10.0.0.2", 00:20:54.026 "adrfam": "ipv4", 00:20:54.026 "trsvcid": "4420", 00:20:54.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.026 "hdgst": false, 00:20:54.026 "ddgst": false 00:20:54.026 }, 00:20:54.026 "method": "bdev_nvme_attach_controller" 00:20:54.026 },{ 00:20:54.026 "params": { 00:20:54.026 "name": "Nvme2", 00:20:54.026 "trtype": "tcp", 00:20:54.026 "traddr": "10.0.0.2", 00:20:54.026 "adrfam": "ipv4", 00:20:54.026 "trsvcid": "4420", 00:20:54.026 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:54.026 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:54.026 "hdgst": false, 00:20:54.026 "ddgst": false 00:20:54.026 }, 00:20:54.027 "method": "bdev_nvme_attach_controller" 00:20:54.027 },{ 00:20:54.027 "params": { 00:20:54.027 "name": "Nvme3", 00:20:54.027 "trtype": "tcp", 00:20:54.027 "traddr": "10.0.0.2", 00:20:54.027 "adrfam": "ipv4", 00:20:54.027 "trsvcid": "4420", 00:20:54.027 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:54.027 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:54.027 "hdgst": false, 00:20:54.027 "ddgst": false 00:20:54.027 }, 00:20:54.027 "method": "bdev_nvme_attach_controller" 00:20:54.027 },{ 00:20:54.027 "params": { 00:20:54.027 "name": "Nvme4", 00:20:54.027 "trtype": "tcp", 00:20:54.027 "traddr": "10.0.0.2", 00:20:54.027 "adrfam": "ipv4", 00:20:54.027 "trsvcid": "4420", 00:20:54.027 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:54.027 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:54.027 "hdgst": false, 00:20:54.027 "ddgst": false 00:20:54.027 }, 00:20:54.027 "method": "bdev_nvme_attach_controller" 00:20:54.027 },{ 00:20:54.027 "params": { 00:20:54.027 "name": "Nvme5", 00:20:54.027 "trtype": "tcp", 00:20:54.027 "traddr": "10.0.0.2", 00:20:54.027 "adrfam": "ipv4", 00:20:54.027 "trsvcid": "4420", 00:20:54.027 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:54.027 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:54.027 "hdgst": false, 00:20:54.027 "ddgst": false 00:20:54.027 }, 00:20:54.027 "method": "bdev_nvme_attach_controller" 00:20:54.027 },{ 00:20:54.027 "params": { 00:20:54.027 "name": "Nvme6", 00:20:54.027 "trtype": "tcp", 00:20:54.027 "traddr": "10.0.0.2", 00:20:54.027 "adrfam": "ipv4", 00:20:54.027 "trsvcid": "4420", 00:20:54.027 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:54.027 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:54.027 "hdgst": false, 00:20:54.027 "ddgst": false 00:20:54.027 }, 00:20:54.027 "method": "bdev_nvme_attach_controller" 00:20:54.027 },{ 00:20:54.027 "params": { 00:20:54.027 "name": "Nvme7", 00:20:54.027 "trtype": "tcp", 00:20:54.027 "traddr": "10.0.0.2", 
00:20:54.027 "adrfam": "ipv4", 00:20:54.027 "trsvcid": "4420", 00:20:54.027 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:54.027 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:54.027 "hdgst": false, 00:20:54.027 "ddgst": false 00:20:54.027 }, 00:20:54.027 "method": "bdev_nvme_attach_controller" 00:20:54.027 },{ 00:20:54.027 "params": { 00:20:54.027 "name": "Nvme8", 00:20:54.027 "trtype": "tcp", 00:20:54.027 "traddr": "10.0.0.2", 00:20:54.027 "adrfam": "ipv4", 00:20:54.027 "trsvcid": "4420", 00:20:54.027 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:54.027 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:54.027 "hdgst": false, 00:20:54.027 "ddgst": false 00:20:54.027 }, 00:20:54.027 "method": "bdev_nvme_attach_controller" 00:20:54.027 },{ 00:20:54.027 "params": { 00:20:54.027 "name": "Nvme9", 00:20:54.027 "trtype": "tcp", 00:20:54.027 "traddr": "10.0.0.2", 00:20:54.027 "adrfam": "ipv4", 00:20:54.027 "trsvcid": "4420", 00:20:54.027 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:54.027 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:54.027 "hdgst": false, 00:20:54.027 "ddgst": false 00:20:54.027 }, 00:20:54.027 "method": "bdev_nvme_attach_controller" 00:20:54.027 },{ 00:20:54.027 "params": { 00:20:54.027 "name": "Nvme10", 00:20:54.027 "trtype": "tcp", 00:20:54.027 "traddr": "10.0.0.2", 00:20:54.027 "adrfam": "ipv4", 00:20:54.027 "trsvcid": "4420", 00:20:54.027 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:54.027 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:54.027 "hdgst": false, 00:20:54.027 "ddgst": false 00:20:54.027 }, 00:20:54.027 "method": "bdev_nvme_attach_controller" 00:20:54.027 }' 00:20:54.027 [2024-11-17 14:30:43.219741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.286 [2024-11-17 14:30:43.262149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.662 Running I/O for 10 seconds... 
00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:55.921 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.180 14:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1520409
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1520409 ']'
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1520409
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:56.180 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1520409
00:20:56.439 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:56.439 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:56.439 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1520409'
00:20:56.439 killing process with pid 1520409
00:20:56.439 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1520409
00:20:56.439 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1520409
00:20:56.439 Received shutdown signal, test time was about 0.823741 seconds
00:20:56.439
00:20:56.439 Latency(us)
00:20:56.439 [2024-11-17T13:30:45.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:56.439 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.439 Verification LBA range: start 0x0 length 0x400
00:20:56.439 Nvme1n1 : 0.80 241.40 15.09 0.00 0.00 261771.50 16526.47 221568.67
00:20:56.439 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.439 Verification LBA range: start 0x0 length 0x400
00:20:56.439 Nvme2n1 : 0.79 250.00 15.62 0.00 0.00 245735.18 3504.75 216097.84
00:20:56.439 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.439 Verification LBA range: start 0x0 length 0x400
00:20:56.439 Nvme3n1 : 0.81 320.86 20.05 0.00 0.00 188289.14 4843.97 218833.25
00:20:56.439 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.439 Verification LBA range: start 0x0 length 0x400
00:20:56.439 Nvme4n1 : 0.82 313.08 19.57 0.00 0.00 189920.17 15044.79 221568.67
00:20:56.439 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.439 Verification LBA range: start 0x0 length 0x400
00:20:56.439 Nvme5n1 : 0.82 311.03 19.44 0.00 0.00 186449.47 16526.47 222480.47
00:20:56.439 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.439 Verification LBA range: start 0x0 length 0x400
00:20:56.439 Nvme6n1 : 0.80 238.89 14.93 0.00 0.00 238005.80 18350.08 223392.28
00:20:56.439 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.439 Verification LBA range: start 0x0 length 0x400
00:20:56.439 Nvme7n1 : 0.82 311.30 19.46 0.00 0.00 179057.64 14075.99 223392.28
00:20:56.439 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.439 Verification LBA range: start 0x0 length 0x400
00:20:56.439 Nvme8n1 : 0.80 252.80 15.80 0.00 0.00 212062.07 7265.95 201508.95
00:20:56.440 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.440 Verification LBA range: start 0x0 length 0x400
00:20:56.440 Nvme9n1 : 0.81 237.71 14.86 0.00 0.00 223427.90 18122.13 240716.58
00:20:56.440 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:56.440 Verification LBA range: start 0x0 length 0x400
00:20:56.440 Nvme10n1 : 0.81 236.32 14.77 0.00 0.00 219821.04 28721.86 228863.11
00:20:56.440 [2024-11-17T13:30:45.665Z] ===================================================================================================================
00:20:56.440 [2024-11-17T13:30:45.665Z] Total : 2713.39 169.59 0.00 0.00 211145.19 3504.75 240716.58
00:20:56.698 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1520354
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:57.634 rmmod nvme_tcp
00:20:57.634 rmmod nvme_fabrics
00:20:57.634 rmmod nvme_keyring
00:20:57.634 14:30:46
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1520354 ']' 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1520354 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1520354 ']' 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1520354 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1520354 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1520354' 00:20:57.634 killing process with pid 1520354 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1520354 00:20:57.634 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1520354 00:20:58.202 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:58.202 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:58.202 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:58.202 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:58.203 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:58.203 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:58.203 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:58.203 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:58.203 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:58.203 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.203 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.203 14:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:00.106 00:21:00.106 real 0m7.310s 00:21:00.106 user 0m21.370s 00:21:00.106 sys 0m1.343s 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.106 ************************************ 00:21:00.106 END TEST nvmf_shutdown_tc2 00:21:00.106 ************************************ 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:00.106 ************************************ 00:21:00.106 START TEST nvmf_shutdown_tc3 00:21:00.106 ************************************ 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.106 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.365 14:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.365 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.366 14:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:00.366 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:00.366 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.366 14:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:00.366 Found net devices under 0000:86:00.0: cvl_0_0 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:00.366 Found net devices under 0000:86:00.1: cvl_0_1 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.366 14:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:21:00.366 00:21:00.366 --- 10.0.0.2 ping statistics --- 00:21:00.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.366 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:21:00.366 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:21:00.626 00:21:00.626 --- 10.0.0.1 ping statistics --- 00:21:00.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.626 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1521673 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1521673 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1521673 ']' 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
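Note the tc3 setup re-runs the same ipts tagging used in tc2: nvmf/common.sh@287 wraps iptables so every rule it inserts carries an SPDK_NVMF comment (the expanded command at @790 above), and the iptr cleanup in the tc2 teardown earlier dropped the tagged rules by filtering a full dump through iptables-restore. Condensed from the traced commands into a pair of helpers (bodies paraphrased from the trace, not copied from nvmf/common.sh):

ipts() {
    # Insert the rule exactly as given, tagged so cleanup can find it later.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # Reload the ruleset minus every SPDK_NVMF-tagged rule.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}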
00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.626 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.626 [2024-11-17 14:30:49.696791] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:21:00.626 [2024-11-17 14:30:49.696836] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.626 [2024-11-17 14:30:49.777302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.626 [2024-11-17 14:30:49.817541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.626 [2024-11-17 14:30:49.817581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.626 [2024-11-17 14:30:49.817587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.626 [2024-11-17 14:30:49.817593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.626 [2024-11-17 14:30:49.817598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.626 [2024-11-17 14:30:49.819244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.626 [2024-11-17 14:30:49.819380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.626 [2024-11-17 14:30:49.819485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.626 [2024-11-17 14:30:49.819485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.563 [2024-11-17 14:30:50.574676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:01.563 14:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
[the shutdown.sh@28/@29 for/cat pair above repeats verbatim once per subsystem, ten times in total]
00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.563 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.563 Malloc1
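[editor's note] The create_subsystems loop traced above emits one block of RPCs per subsystem into rpcs.txt, which shutdown.sh@36 then replays with a single rpc_cmd; the Malloc1..Malloc10 lines around this note are the malloc bdevs those RPCs create, one per subsystem nqn.2016-06.io.spdk:cnode1..10. The exact block lives in target/shutdown.sh and is not shown in this trace; a plausible per-iteration sketch using standard SPDK RPC names (the bdev size, block size, and serial numbers here are assumptions):

  {
    echo "bdev_malloc_create 64 512 -b Malloc$i"
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
    echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
    echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
  } >> rpcs.txt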
00:21:01.563 [2024-11-17 14:30:50.682376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.563 Malloc2 00:21:01.563 Malloc3 00:21:01.821 Malloc4 00:21:01.821 Malloc5 00:21:01.821 Malloc6 00:21:01.821 Malloc7 00:21:01.821 Malloc8 00:21:01.821 Malloc9 00:21:02.081 Malloc10 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1521953 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1521953 /var/tmp/bdevperf.sock 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1521953 ']' 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
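[editor's note] The --json /dev/fd/63 argument above is the footprint of bash process substitution: shutdown.sh@125 feeds the output of gen_nvmf_target_json (traced next) to bdevperf as a file descriptor rather than a temporary file. The equivalent shape, with paths abbreviated:

  bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
           --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)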
00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:02.081 { 00:21:02.081 "params": { 00:21:02.081 "name": "Nvme$subsystem", 00:21:02.081 "trtype": "$TEST_TRANSPORT", 00:21:02.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.081 "adrfam": "ipv4", 00:21:02.081 "trsvcid": "$NVMF_PORT", 00:21:02.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.081 "hdgst": ${hdgst:-false}, 00:21:02.081 "ddgst": ${ddgst:-false} 00:21:02.081 }, 00:21:02.081 "method": "bdev_nvme_attach_controller" 00:21:02.081 } 00:21:02.081 EOF 00:21:02.081 )") 00:21:02.081 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
[the nvmf/common.sh@562/@582 for/config+=/cat block above repeats verbatim once per subsystem, ten times in total; bdevperf starts up partway through the run:]
00:21:02.082 [2024-11-17 14:30:51.159313] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:21:02.082 [2024-11-17 14:30:51.159367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521953 ] 00:21:02.082 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
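[editor's note] The generator pattern collapsed above relies on the heredoc delimiter being unquoted, so $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and friends expand at cat time, producing one concrete JSON fragment per subsystem; nvmf/common.sh@585/@586 just below then set IFS=, so that "${config[*]}" joins the fragments with commas before jq validates and pretty-prints the result. A self-contained skeleton of the idiom (printf stands in for the heredoc, and the wrapper object shape is illustrative, not the harness's exact output):

  config=()
  for subsystem in 1 2 3; do
    config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' "$subsystem" "$subsystem" "$subsystem")")
  done
  IFS=,
  printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }' "${config[*]}" | jq .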
00:21:02.082 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:02.082 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:02.082 "params": { 00:21:02.082 "name": "Nvme1", 00:21:02.082 "trtype": "tcp", 00:21:02.082 "traddr": "10.0.0.2", 00:21:02.082 "adrfam": "ipv4", 00:21:02.082 "trsvcid": "4420", 00:21:02.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.082 "hdgst": false, 00:21:02.082 "ddgst": false 00:21:02.082 }, 00:21:02.082 "method": "bdev_nvme_attach_controller" 00:21:02.082 },{ 00:21:02.082 "params": { 00:21:02.082 "name": "Nvme2", 00:21:02.082 "trtype": "tcp", 00:21:02.082 "traddr": "10.0.0.2", 00:21:02.082 "adrfam": "ipv4", 00:21:02.082 "trsvcid": "4420", 00:21:02.082 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:02.082 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:02.082 "hdgst": false, 00:21:02.082 "ddgst": false 00:21:02.082 }, 00:21:02.082 "method": "bdev_nvme_attach_controller" 00:21:02.082 },{ 00:21:02.082 "params": { 00:21:02.082 "name": "Nvme3", 00:21:02.082 "trtype": "tcp", 00:21:02.082 "traddr": "10.0.0.2", 00:21:02.082 "adrfam": "ipv4", 00:21:02.082 "trsvcid": "4420", 00:21:02.082 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:02.082 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:02.082 "hdgst": false, 00:21:02.083 "ddgst": false 00:21:02.083 }, 00:21:02.083 "method": "bdev_nvme_attach_controller" 00:21:02.083 },{ 00:21:02.083 "params": { 00:21:02.083 "name": "Nvme4", 00:21:02.083 "trtype": "tcp", 00:21:02.083 "traddr": "10.0.0.2", 00:21:02.083 "adrfam": "ipv4", 00:21:02.083 "trsvcid": "4420", 00:21:02.083 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:02.083 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:02.083 "hdgst": false, 00:21:02.083 "ddgst": false 00:21:02.083 }, 00:21:02.083 "method": "bdev_nvme_attach_controller" 00:21:02.083 },{ 00:21:02.083 "params": { 00:21:02.083 "name": "Nvme5", 00:21:02.083 "trtype": "tcp", 00:21:02.083 "traddr": "10.0.0.2", 00:21:02.083 "adrfam": "ipv4", 00:21:02.083 "trsvcid": "4420", 00:21:02.083 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:02.083 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:02.083 "hdgst": false, 00:21:02.083 "ddgst": false 00:21:02.083 }, 00:21:02.083 "method": "bdev_nvme_attach_controller" 00:21:02.083 },{ 00:21:02.083 "params": { 00:21:02.083 "name": "Nvme6", 00:21:02.083 "trtype": "tcp", 00:21:02.083 "traddr": "10.0.0.2", 00:21:02.083 "adrfam": "ipv4", 00:21:02.083 "trsvcid": "4420", 00:21:02.083 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:02.083 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:02.083 "hdgst": false, 00:21:02.083 "ddgst": false 00:21:02.083 }, 00:21:02.083 "method": "bdev_nvme_attach_controller" 00:21:02.083 },{ 00:21:02.083 "params": { 00:21:02.083 "name": "Nvme7", 00:21:02.083 "trtype": "tcp", 00:21:02.083 "traddr": "10.0.0.2", 00:21:02.083 "adrfam": "ipv4", 00:21:02.083 "trsvcid": "4420", 00:21:02.083 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:02.083 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:02.083 "hdgst": false, 00:21:02.083 "ddgst": false 00:21:02.083 }, 00:21:02.083 "method": "bdev_nvme_attach_controller" 00:21:02.083 },{ 00:21:02.083 "params": { 00:21:02.083 "name": "Nvme8", 00:21:02.083 "trtype": "tcp", 00:21:02.083 "traddr": "10.0.0.2", 00:21:02.083 "adrfam": "ipv4", 00:21:02.083 "trsvcid": "4420", 00:21:02.083 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:02.083 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:02.083 "hdgst": false, 00:21:02.083 "ddgst": false 00:21:02.083 }, 00:21:02.083 "method": "bdev_nvme_attach_controller" 00:21:02.083 },{ 00:21:02.083 "params": { 00:21:02.083 "name": "Nvme9", 00:21:02.083 "trtype": "tcp", 00:21:02.083 "traddr": "10.0.0.2", 00:21:02.083 "adrfam": "ipv4", 00:21:02.083 "trsvcid": "4420", 00:21:02.083 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:02.083 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:02.083 "hdgst": false, 00:21:02.083 "ddgst": false 00:21:02.083 }, 00:21:02.083 "method": "bdev_nvme_attach_controller" 00:21:02.083 },{ 00:21:02.083 "params": { 00:21:02.083 "name": "Nvme10", 00:21:02.083 "trtype": "tcp", 00:21:02.083 "traddr": "10.0.0.2", 00:21:02.083 "adrfam": "ipv4", 00:21:02.083 "trsvcid": "4420", 00:21:02.083 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:02.083 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:02.083 "hdgst": false, 00:21:02.083 "ddgst": false 00:21:02.083 }, 00:21:02.083 "method": "bdev_nvme_attach_controller" 00:21:02.083 }' 00:21:02.083 [2024-11-17 14:30:51.234949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.083 [2024-11-17 14:30:51.276031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.991 Running I/O for 10 seconds... 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:03.991 14:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:03.991 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1521673 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1521673 ']' 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1521673 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1521673 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 1521673' 00:21:04.255 killing process with pid 1521673 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1521673 00:21:04.255 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1521673 00:21:04.255 [2024-11-17 14:30:53.454399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053070 is same with the state(6) to be set
[the same tcp.c:1773 recv-state message repeats, identical apart from the timestamp, dozens of times for tqpair=0x1053070 between 14:30:53.454457 and 14:30:53.454850]
00:21:04.256 [2024-11-17 14:30:53.455880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c5790 is same with the state(6) to be set
[repeats likewise for tqpair=0x11c5790 between 14:30:53.455911 and 14:30:53.456309]
00:21:04.257 [2024-11-17 14:30:53.457426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053560 is same with the state(6) to be set
[repeats likewise for tqpair=0x1053560 between 14:30:53.457436 and 14:30:53.457818]
00:21:04.258 [2024-11-17 14:30:53.458959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set
[repeats likewise for tqpair=0x1053a30 from 14:30:53.458986; this excerpt breaks off mid-run after 14:30:53.459134]
14:30:53.459144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same 
with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.459387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053a30 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.460428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.460451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.460460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.258 [2024-11-17 14:30:53.460466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460479] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the 
state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.460846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f20 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 
14:30:53.461520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.259 [2024-11-17 14:30:53.461634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same 
with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461789] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.461866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10542a0 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.462673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054770 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the 
state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.260 [2024-11-17 14:30:53.463591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 
14:30:53.463717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.463784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1054c40 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.467663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.261 [2024-11-17 14:30:53.467693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.261 [2024-11-17 14:30:53.467703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.261 [2024-11-17 14:30:53.467711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.261 [2024-11-17 14:30:53.467718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.261 [2024-11-17 14:30:53.467726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.261 [2024-11-17 14:30:53.467734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.261 [2024-11-17 14:30:53.467740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.261 [2024-11-17 14:30:53.467747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb98150 is same with the state(6) to be set 00:21:04.261 [2024-11-17 14:30:53.467778] nvme_qpair.c: 
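
For context: the repeated tcp.c:1773 and nvme_tcp.c:326 errors above come from the state-set guard in SPDK's TCP transport code, which logs and ignores a request to set the PDU receive state to the value it already holds; a teardown path that keeps requesting the same state therefore emits one error line per call. Below is a minimal, self-contained sketch of that guard. The type and constant names here are illustrative stand-ins, not the actual SPDK source (the real functions live in lib/nvmf/tcp.c and lib/nvme/nvme_tcp.c), and the symbolic name of the log's "state(6)" depends on the SPDK revision.

#include <stdio.h>

/* Illustrative stand-ins for SPDK's enum nvme_tcp_pdu_recv_state and
 * struct spdk_nvmf_tcp_qpair; only the field the guard needs. */
enum pdu_recv_state {
	RECV_STATE_AWAIT_PDU_READY = 0,
	RECV_STATE_6 = 6 /* the "state(6)" seen in the log */
};

struct tcp_qpair {
	enum pdu_recv_state recv_state;
};

/* Mirrors the shape of the SPDK guard: a redundant transition is logged
 * and ignored rather than applied. */
static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr,
		        "The recv state of tqpair=%p is same with the state(%d) to be set\n",
		        (void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
	/* per-state bookkeeping elided */
}

int main(void)
{
	struct tcp_qpair q = { RECV_STATE_AWAIT_PDU_READY };

	set_recv_state(&q, RECV_STATE_6); /* real transition: silent */
	set_recv_state(&q, RECV_STATE_6); /* redundant: prints the error line */
	return 0;
}
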
00:21:04.261 [2024-11-17 14:30:53.467778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:04.261 [2024-11-17 14:30:53.467787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for cid:1, cid:2 and cid:3 ...]
00:21:04.261 [2024-11-17 14:30:53.467839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4d5c0 is same with the state(6) to be set
[... the same four-command abort sequence followed by the nvme_tcp.c: 326 *ERROR* line repeats for tqpair=0x636610 (14:30:53.467930), tqpair=0xb50ca0 (.468009), tqpair=0x7221b0 (.468091), tqpair=0xb4de30 (.468171), tqpair=0x721d50 (.468248) and tqpair=0x71fc70 (.468330) ...]
00:21:04.262 [2024-11-17 14:30:53.468617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.262 [2024-11-17 14:30:53.468639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the WRITE / ABORTED - SQ DELETION pair repeats for cid:1 through cid:28 (lba:16512 through lba:19968, stride 128) ...]
00:21:04.263 [2024-11-17 14:30:53.469081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.263 [2024-11-17
14:30:53.469088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 
14:30:53.469237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 
14:30:53.469395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 
14:30:53.469545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.469599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.263 [2024-11-17 14:30:53.469606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-17 14:30:53.471240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:04.263 [2024-11-17 14:30:53.471271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4d5c0 (9): Bad file descriptor 00:21:04.535 [2024-11-17 14:30:53.472325] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.535 [2024-11-17 14:30:53.472511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.535 [2024-11-17 14:30:53.472528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4d5c0 with addr=10.0.0.2, port=4420 00:21:04.535 [2024-11-17 14:30:53.472536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4d5c0 is same with the state(6) to be set 00:21:04.535 [2024-11-17 14:30:53.472582] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.535 [2024-11-17 14:30:53.472626] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.535 [2024-11-17 14:30:53.472669] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.535 [2024-11-17 14:30:53.472710] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.535 [2024-11-17 14:30:53.472750] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.535 [2024-11-17 14:30:53.472916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4d5c0 (9): Bad file descriptor 00:21:04.535 [2024-11-17 14:30:53.473108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:04.535 [2024-11-17 14:30:53.473120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:04.535 [2024-11-17 14:30:53.473130] 
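This block is one complete failed reset/reconnect cycle on the host side: the TCP queue pairs are torn down (which is what aborts the outstanding admin and I/O commands above with SQ DELETION status), the reconnect path calls connect() toward the target and gets errno 111, which on Linux is ECONNREFUSED (nothing is accepting connections on 10.0.0.2:4420 at that instant), and the controller is marked failed. Below is a minimal, SPDK-independent sketch of retrying a connect while a listener restarts; the function name, backoff, and attempt budget are illustrative assumptions, not taken from the test.

#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Retry a TCP connect while the peer restarts. ECONNREFUSED
 * (errno 111 on Linux) means no listener yet -- the same error
 * posix_sock_create() reports in the log above. */
static int connect_with_retry(const char *ip, uint16_t port, int attempts)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };

    if (inet_pton(AF_INET, ip, &addr.sin_addr) != 1)
        return -1;

    for (int i = 0; i < attempts; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                     /* listener is back, hand the fd out */
        int err = errno;
        close(fd);
        if (err != ECONNREFUSED) {         /* only "connection refused" is retryable here */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n", err, strerror(err));
            return -1;
        }
        usleep(100 * 1000);                /* back off 100 ms before the next attempt */
    }
    return -1;                             /* peer never came back within the budget */
}

In the run above the reconnect poller gives up instead of retrying further, so nvme_ctrlr_fail() leaves the controller in a failed state and bdev_nvme reports "Resetting controller failed."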
00:21:04.535 [2024-11-17 14:30:53.473820 - 14:30:53.474318] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-31 nsid:1 lba:16384-20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [condensed from ~64 log lines]
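Every condensed completion above carries the status pair (00/08): status code type 00h (generic command status) and status code 08h, which the NVMe base specification defines as "Command Aborted due to SQ Deletion" -- the expected disposition for commands still in flight when their submission queue is destroyed during a reset. A small self-contained decoder for that pair follows, assuming the standard 16-bit completion status layout (phase tag in bit 0); print_status() is a hypothetical helper, not an SPDK API.

#include <stdint.h>
#include <stdio.h>

/* Decode the "(SCT/SC)" pair the log prints, e.g. "(00/08)".
 * status16 is DW3[31:16] of an NVMe completion entry:
 * bit 0 phase tag, bits 8:1 status code (SC),
 * bits 11:9 status code type (SCT), bit 15 do-not-retry (DNR). */
static void print_status(uint16_t status16)
{
    uint8_t sc  = (status16 >> 1) & 0xff;
    uint8_t sct = (status16 >> 9) & 0x07;
    uint8_t dnr = (status16 >> 15) & 0x01;

    printf("(%02x/%02x) dnr:%u%s\n", sct, sc, dnr,
           (sct == 0x00 && sc == 0x08) ? " -> ABORTED - SQ DELETION" : "");
}

int main(void)
{
    print_status(0x08 << 1);   /* SCT 0h, SC 08h: the value behind "(00/08)" */
    return 0;
}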
00:21:04.536 [2024-11-17 14:30:53.474638 - 14:30:53.475033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1055130 is same with the state(6) to be set [message repeated ~60 times, condensed]
00:21:04.537 [2024-11-17 14:30:53.475601 - 14:30:53.476006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1055600 is same with the state(6) to be set [message repeated ~60 times, condensed]
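Unlike the nvme_tcp.c: 326 messages, which come from the host (initiator) side, these tcp.c:1773 messages are logged by the NVMe-oF target: its qpair receive state machine is being asked to enter the state it is already in, over and over, while the connection is torn down. The repetition is noisy but harmless; each call is a redundant transition that is logged and then ignored. A sketch of the guard pattern that produces such a message follows; the type names and function body are assumptions inferred from the log text, not copied from SPDK sources.

#include <stdio.h>

enum tcp_pdu_recv_state {
    /* ... earlier states elided ... */
    TCP_PDU_RECV_STATE_ERROR = 6   /* the "state(6)" the log keeps mentioning */
};

struct tcp_qpair {
    enum tcp_pdu_recv_state recv_state;
    /* ... transport bookkeeping elided ... */
};

static void set_recv_state(struct tcp_qpair *tqpair, enum tcp_pdu_recv_state state)
{
    if (tqpair->recv_state == state) {
        /* Redundant transition: during teardown every pending event
         * funnels into the same error state, so this fires repeatedly. */
        fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
    /* ... state-specific handling would follow here ... */
}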
00:21:04.538 [2024-11-17 14:30:53.482952 - 14:30:53.483506] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:32-63 nsid:1 lba:20480-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [condensed from ~64 log lines]
00:21:04.539 [2024-11-17 14:30:53.483516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb28cf0 is same with the state(6) to be set
00:21:04.539 [2024-11-17 14:30:53.483738 - 14:30:53.483815] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 each completed ABORTED - SQ DELETION (00/08) [condensed from 8 log lines]
00:21:04.539 [2024-11-17 14:30:53.483824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb82990 is same with the state(6) to be set
00:21:04.539 [2024-11-17 14:30:53.483854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb98150 (9): Bad file descriptor
00:21:04.539 [2024-11-17 14:30:53.483890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:04.539 [2024-11-17 14:30:53.483901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.539 [2024-11-17 14:30:53.483911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:04.539 [2024-11-17 14:30:53.483920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0
00:21:04.539 [2024-11-17 14:30:53.483890 - 14:30:53.483958] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [4 identical pairs condensed]
00:21:04.539 [2024-11-17 14:30:53.483966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53270 is same with the state(6) to be set
00:21:04.539 [2024-11-17 14:30:53.483987 - 14:30:53.484078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x636610 / 0xb50ca0 / 0x7221b0 / 0xb4de30 / 0x721d50 / 0x71fc70 (9): Bad file descriptor [6 entries condensed]
00:21:04.539 [2024-11-17 14:30:53.485549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:04.539 [2024-11-17 14:30:53.485757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:04.539 [2024-11-17 14:30:53.485905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.539 [2024-11-17 14:30:53.485923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x636610 with addr=10.0.0.2, port=4420
00:21:04.539 [2024-11-17 14:30:53.485934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x636610 is same with the state(6) to be set
00:21:04.539 [2024-11-17 14:30:53.486545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.539 [2024-11-17 14:30:53.486566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4d5c0 with addr=10.0.0.2, port=4420
00:21:04.539 [2024-11-17 14:30:53.486577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4d5c0 is same with the state(6) to be set
00:21:04.539 [2024-11-17 14:30:53.486590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x636610 (9): Bad file descriptor
00:21:04.539 [2024-11-17 14:30:53.486682] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:04.539 [2024-11-17 14:30:53.486735] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:04.539 [2024-11-17 14:30:53.486756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4d5c0 (9): Bad file descriptor
00:21:04.539 [2024-11-17 14:30:53.486768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:21:04.539 [2024-11-17 14:30:53.486781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:21:04.539 [2024-11-17 14:30:53.486792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:21:04.539 [2024-11-17 14:30:53.486802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:21:04.539 [2024-11-17 14:30:53.486874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:21:04.539 [2024-11-17 14:30:53.486884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:21:04.539 [2024-11-17 14:30:53.486893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:21:04.539 [2024-11-17 14:30:53.486901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:21:04.539 [2024-11-17 14:30:53.493731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb82990 (9): Bad file descriptor
00:21:04.539 [2024-11-17 14:30:53.493764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53270 (9): Bad file descriptor
00:21:04.539 [2024-11-17 14:30:53.493893 - 14:30:53.494910] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 identical READ/ABORTED command-completion pairs condensed]
00:21:04.542 [2024-11-17 14:30:53.494918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9264e0 is same with the state(6) to be set
00:21:04.542 [2024-11-17 14:30:53.495968 - 14:30:53.497004] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 identical READ/ABORTED command-completion pairs condensed]
00:21:04.543 [2024-11-17 14:30:53.497014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9276c0 is same with the state(6) to be set
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-17 14:30:53.498683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.544 [2024-11-17 14:30:53.498691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.498988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.498995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.499006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.499013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.499022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.499029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.499037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.499045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.499053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.499060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.499069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.499076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.499083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46f00 is same with the state(6) to be set 00:21:04.545 [2024-11-17 14:30:53.500119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.500134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.500144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.500152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.500161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-17 14:30:53.500168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.545 [2024-11-17 14:30:53.500179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500329] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-17 14:30:53.500720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-17 14:30:53.500727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.547 [2024-11-17 14:30:53.500970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.500985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.500992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.501001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.501007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.501016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.501023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.501032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.501039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.501047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.501054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.501063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.501069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.501078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.501084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.501093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.501100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.501109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.501116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 
14:30:53.501124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.501131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.501139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc481c0 is same with the state(6) to be set 00:21:04.547 [2024-11-17 14:30:53.502173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.502187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.502198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.502207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.502216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.502223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.502231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.502238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.502247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.502254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.502263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.502270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.502279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.502286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.502295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.502301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.547 [2024-11-17 14:30:53.502310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-17 14:30:53.502317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.548 [2024-11-17 14:30:53.502627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-17 14:30:53.502634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.548 [2024-11-17 14:30:53.502642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.548 [2024-11-17 14:30:53.502649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/ABORTED - SQ DELETION pairs repeated for READ cid:34-63 (lba:12544-16256) and WRITE cid:0-3 (lba:16384-16768) ...]
00:21:04.549 [2024-11-17 14:30:53.503198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc496e0 is same with the state(6) to be set
00:21:04.549 [2024-11-17 14:30:53.504223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.549 [2024-11-17 14:30:53.504237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/ABORTED - SQ DELETION pairs repeated for cid:1-63 (lba:8320-16256) ...]
00:21:04.551 [2024-11-17 14:30:53.505209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc40550 is same with the state(6) to be set
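[editor note] Every completion above carries the same status, which SPDK prints as (SCT/SC) in hex: ABORTED - SQ DELETION (00/08) is Status Code Type 0x0 (generic command status) with Status Code 0x08 (Command Aborted due to SQ Deletion), and the trailing p/m/dnr fields are the phase-tag, more, and do-not-retry bits of the completion status word. A minimal decoder sketch, assuming the standard NVMe completion-status bit layout (P at bit 0, SC in bits 8:1, SCT in bits 11:9, M at bit 14, DNR at bit 15); print_status is illustrative only, not an SPDK API:

#include <stdint.h>
#include <stdio.h>

/* Decode a raw 16-bit NVMe completion status word into the fields that
 * the log's completion lines show: (SCT/SC) plus the p/m/dnr flag bits. */
static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1;          /* bit 0: phase tag */
    unsigned sc  = (status >> 1) & 0xff;  /* bits 8:1: status code */
    unsigned sct = (status >> 9) & 0x7;   /* bits 11:9: status code type */
    unsigned m   = (status >> 14) & 0x1;  /* bit 14: more */
    unsigned dnr = (status >> 15) & 0x1;  /* bit 15: do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0 / SC 0x08 = Command Aborted due to SQ Deletion; with p, m,
     * and dnr clear this prints "(00/08) p:0 m:0 dnr:0" as in the log. */
    print_status(0x08 << 1);
    return 0;
}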
00:21:04.551 [2024-11-17 14:30:53.506186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:04.551 [2024-11-17 14:30:53.506201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:04.551 [2024-11-17 14:30:53.506210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:04.551 [2024-11-17 14:30:53.506219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:04.551 [2024-11-17 14:30:53.506282] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:21:04.551 [2024-11-17 14:30:53.506302] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:21:04.551 [2024-11-17 14:30:53.506382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:04.551 [2024-11-17 14:30:53.506393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:04.551 [2024-11-17 14:30:53.506673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.551 [2024-11-17 14:30:53.506688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7221b0 with addr=10.0.0.2, port=4420
00:21:04.551 [2024-11-17 14:30:53.506697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7221b0 is same with the state(6) to be set
00:21:04.551 [2024-11-17 14:30:53.506824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.551 [2024-11-17 14:30:53.506835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x721d50 with addr=10.0.0.2, port=4420
00:21:04.551 [2024-11-17 14:30:53.506842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721d50 is same with the state(6) to be set
00:21:04.551 [2024-11-17 14:30:53.507051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.551 [2024-11-17 14:30:53.507061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71fc70 with addr=10.0.0.2, port=4420
00:21:04.551 [2024-11-17 14:30:53.507069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71fc70 is same with the state(6) to be set
00:21:04.551 [2024-11-17 14:30:53.507216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.551 [2024-11-17 14:30:53.507226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4de30 with addr=10.0.0.2, port=4420
00:21:04.551 [2024-11-17 14:30:53.507233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4de30 is same with the state(6) to be set
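[editor note] The connect() failed, errno = 111 lines above are Linux ECONNREFUSED: while the target controllers are being reset, nothing is accepting connections on 10.0.0.2:4420, so each host reconnect attempt is refused until the listener is back. A minimal sketch of how such a refusal surfaces from a plain blocking connect(); this is a hypothetical standalone probe for illustration, not SPDK's posix_sock_create:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Attempt one TCP connection; a port with no listener fails with
 * errno 111 (ECONNREFUSED) on Linux, exactly as logged above. */
static int probe(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -errno;

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, ip, &sa.sin_addr);

    int rc = connect(fd, (struct sockaddr *)&sa, sizeof(sa));
    if (rc < 0)
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    int err = rc < 0 ? -errno : 0;
    close(fd);
    return err;
}

int main(void)
{
    /* 10.0.0.2:4420 is the target address from the log; with no NVMe/TCP
     * listener active this prints: connect() failed, errno = 111 */
    return probe("10.0.0.2", 4420) == -ECONNREFUSED ? 0 : 1;
}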
00:21:04.551 [2024-11-17 14:30:53.508395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.551 [2024-11-17 14:30:53.508411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/ABORTED - SQ DELETION pairs repeated for cid:1-63 (lba:16512-24448) ...]
00:21:04.553 [2024-11-17 14:30:53.509395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a715f0 is same with the state(6) to be set
00:21:04.553 [2024-11-17 14:30:53.510421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.553 [2024-11-17 14:30:53.510434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/ABORTED - SQ DELETION pairs repeated for cid:1-36 (lba:16512-20992) ...]
00:21:04.554 [2024-11-17 14:30:53.510988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:21:04.554 [2024-11-17 14:30:53.510999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.554 [2024-11-17 14:30:53.511007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.554 [2024-11-17 14:30:53.511014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.554 [2024-11-17 14:30:53.511022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.554 [2024-11-17 14:30:53.511029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.554 [2024-11-17 14:30:53.511037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.554 [2024-11-17 14:30:53.511044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.554 [2024-11-17 14:30:53.511052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.554 [2024-11-17 14:30:53.511059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.554 [2024-11-17 14:30:53.511067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.554 [2024-11-17 14:30:53.511074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.554 [2024-11-17 14:30:53.511082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.554 [2024-11-17 14:30:53.511089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.554 [2024-11-17 14:30:53.511097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.554 [2024-11-17 14:30:53.511104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.554 [2024-11-17 14:30:53.511112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 14:30:53.511119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 14:30:53.511133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:04.555 [2024-11-17 14:30:53.511148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 14:30:53.511163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 14:30:53.511177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 14:30:53.511194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 14:30:53.511209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 14:30:53.511223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 14:30:53.511238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 14:30:53.511252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 14:30:53.511267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 14:30:53.511282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.555 [2024-11-17 14:30:53.511290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.555 [2024-11-17 
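For reference when triaging runs like this: the "(00/08)" in those completions is the NVMe status code type / status code pair, i.e. SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion". That is the expected way for in-flight verify reads to finish when the shutdown test deletes the I/O submission queues underneath them. A quick way to confirm the flood is uniform rather than hiding real media errors is to tally completion records from a saved copy of this console output (the bdevperf.log file name below is only a placeholder, not a file the test writes):

```bash
# Count "aborted due to SQ deletion" completions per queue in a saved copy of this log.
grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' bdevperf.log | sort | uniq -c
# Any status pair other than (00/08) showing up during shutdown would merit a closer look:
grep -oE '\([0-9a-f]{2}/[0-9a-f]{2}\) qid:[0-9]+' bdevperf.log | sort | uniq -c
```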
00:21:04.555 [2024-11-17 14:30:53.511385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.555 [2024-11-17 14:30:53.511392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.555 [2024-11-17 14:30:53.511399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3f070 is same with the state(6) to be set
00:21:04.555 [2024-11-17 14:30:53.512608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:04.555 [2024-11-17 14:30:53.512625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:04.555 [2024-11-17 14:30:53.512635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:04.555 task offset: 16384 on job bdev=Nvme6n1 fails
00:21:04.555
00:21:04.555 Latency(us)
00:21:04.555 (all jobs ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, over Verification LBA range: start 0x0 length 0x400; every job ended in error after the runtime shown)
00:21:04.555 [2024-11-17T13:30:53.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:04.555 Nvme1n1  : 0.70 183.74 11.48 91.87 0.00 229225.81 17894.18 238892.97
00:21:04.555 Nvme2n1  : 0.70 183.19 11.45 91.59 0.00 224528.47 35332.45 219745.06
00:21:04.555 Nvme3n1  : 0.70 182.65 11.42 91.32 0.00 219900.88 16184.54 240716.58
00:21:04.555 Nvme4n1  : 0.70 182.12 11.38 91.06 0.00 215297.34 26442.35 206979.78
00:21:04.555 Nvme5n1  : 0.70  96.47  6.03 90.79 0.00 306401.52 19261.89 264423.51
00:21:04.555 Nvme6n1  : 0.67 190.51 11.91 95.25 0.00 194332.12  3234.06 238892.97
00:21:04.555 Nvme7n1  : 0.69 186.59 11.66 93.30 0.00 193723.58 16868.40 238892.97
00:21:04.555 Nvme8n1  : 0.71 180.01 11.25 90.00 0.00 196826.75 20629.59 231598.53
00:21:04.556 Nvme9n1  : 0.71 179.51 11.22 89.75 0.00 192221.12 18805.98 237069.36
00:21:04.556 Nvme10n1 : 0.71  90.54  5.66 90.54 0.00 277521.81 26784.28 257129.07
00:21:04.556 [2024-11-17T13:30:53.781Z] ===================================================================================================================
00:21:04.556 [2024-11-17T13:30:53.781Z] Total    : 1655.30 103.46 915.48 0.00 220406.77 3234.06 264423.51
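The columns cross-check cleanly: MiB/s is just IOPS times the 65536-byte IO size from the job header, which is a handy invariant for spotting a mis-read row in a table that arrived line-wrapped like this one. For example, for the Nvme1n1 row:

```bash
# 183.74 IOPS at a 65536-byte IO size; 65536/1048576 = 1/16 MiB per IO
awk 'BEGIN { printf "%.2f MiB/s\n", 183.74 * 65536 / 1048576 }'   # -> 11.48
```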
00:21:04.556 [2024-11-17 14:30:53.543249] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:04.556 [2024-11-17 14:30:53.543300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:04.556 [2024-11-17 14:30:53.543583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.556 [2024-11-17 14:30:53.543602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb50ca0 with addr=10.0.0.2, port=4420
00:21:04.556 [2024-11-17 14:30:53.543614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb50ca0 is same with the state(6) to be set
00:21:04.556 [2024-11-17 14:30:53.543714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.556 [2024-11-17 14:30:53.543725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb98150 with addr=10.0.0.2, port=4420
00:21:04.556 [2024-11-17 14:30:53.543732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb98150 is same with the state(6) to be set
00:21:04.556 [2024-11-17 14:30:53.543746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7221b0 (9): Bad file descriptor
00:21:04.556 [2024-11-17 14:30:53.543758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x721d50 (9): Bad file descriptor
00:21:04.556 [2024-11-17 14:30:53.543767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71fc70 (9): Bad file descriptor
00:21:04.556 [2024-11-17 14:30:53.543776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4de30 (9): Bad file descriptor
[... the same connect() failed (errno = 111) / sock connection error / recv state triple repeats for tqpair=0x636610 (14:30:53.544163), 0xb4d5c0 (.544324), 0xb53270 (.544478) and 0xb82990 (.544644), all against addr=10.0.0.2, port=4420, followed by Failed to flush ... (9): Bad file descriptor for tqpair=0xb50ca0 (.544670) and 0xb98150 (.544679) ...]
00:21:04.556 [2024-11-17 14:30:53.544688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:04.556 [2024-11-17 14:30:53.544695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:04.556 [2024-11-17 14:30:53.544707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:04.556 [2024-11-17 14:30:53.544716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
[... the same four-record failure sequence (Ctrlr is in error state / controller reinitialization failed / in failed state. / Resetting controller failed.) repeats for cnode2 (14:30:53.544724), cnode3 (.544749) and cnode4 (.544774) ...]
00:21:04.556 [2024-11-17 14:30:53.544830] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:21:04.556 [2024-11-17 14:30:53.544841] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:21:04.556 [2024-11-17 14:30:53.545402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x636610 (9): Bad file descriptor
00:21:04.556 [2024-11-17 14:30:53.545419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4d5c0 (9): Bad file descriptor
00:21:04.556 [2024-11-17 14:30:53.545428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53270 (9): Bad file descriptor
00:21:04.556 [2024-11-17 14:30:53.545437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb82990 (9): Bad file descriptor
[... the four-record failure sequence repeats for cnode5 (14:30:53.545445) and cnode10 (.545471) ...]
00:21:04.557 [2024-11-17 14:30:53.545530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:04.557 [2024-11-17 14:30:53.545540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:04.557 [2024-11-17 14:30:53.545548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:04.557 [2024-11-17 14:30:53.545556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[... the four-record failure sequence repeats for cnode7 (14:30:53.545584), cnode6 (.545610), cnode8 (.545634) and cnode9 (.545659) ...]
[... a second round of connect() failed (errno = 111) / sock connection error / recv state triples follows for tqpair=0xb4de30 (14:30:53.545865), 0x71fc70 (.546044), 0x721d50 (.546279) and 0x7221b0 (.546486), again against addr=10.0.0.2, port=4420, each then reported as Failed to flush ... (9): Bad file descriptor (.546534 through .546561) ...]
[... the four-record failure sequence repeats one final time for cnode4 (14:30:53.546588), cnode3 (.546615), cnode2 (.546640) and cnode1 (.546666) ...]
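One note on the repeated connect() failed, errno = 111 records: errno 111 is ECONNREFUSED on Linux. Once the target app has stopped (the spdk_app_stop'd on non-zero warning above), nothing is listening on 10.0.0.2:4420 any more, so every reconnect attempt gets refused immediately; the cascade of reset failures is the host side giving up on each subsystem in turn. A bash-only probe of the same condition, using this run's addresses, might look like:

```bash
# errno 111 == ECONNREFUSED. Probe the NVMe/TCP listener this test was using;
# after the target app stops, the open fails at once instead of timing out.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "port 4420 is still accepting connections"
else
    echo "connection refused - the nvmf target listener is gone"
fi
```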
00:21:04.817 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:21:05.759 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1521953
[... common/autotest_common.sh@640-679 traces the NOT wrapper resolving wait via valid_exec_arg / type -t, running `wait 1521953`, and normalizing the exit status: es=255, (( es > 128 )) folds it to es=127, the case statement maps that to es=1, and (( !es == 0 )) lets NOT report success because the waited-on process really did fail ...]
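The es=255 -> 127 -> 1 dance in that NOT wrapper is just exit-status normalization: statuses above 128 conventionally mean "terminated by a signal", and the harness folds everything exotic down to a generic failure before asserting the command did NOT succeed. A minimal sketch of the pattern (the real helper in test/common/autotest_common.sh handles more cases):

```bash
# Run a command that is expected to fail; succeed only if it actually failed.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=127   # signal-style statuses collapse to 127...
    [[ $es == 127 ]] && es=1   # ...and then to a generic failure of 1
    (( !es == 0 ))             # arithmetic truth: success iff es is nonzero
}
NOT wait 1521953 && echo "pid 1521953 is gone, as expected"
```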
00:21:05.759 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:05.760 rmmod nvme_tcp
00:21:05.760 rmmod nvme_fabrics
00:21:05.760 rmmod nvme_keyring
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1521673 ']'
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1521673
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1521673 ']'
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1521673
00:21:05.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1521673) - No such process
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1521673 is not found'
00:21:05.760 Process with pid 1521673 is not found
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:05.760 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:08.298 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:08.298
00:21:08.298 real 0m7.701s
00:21:08.298 user 0m18.721s
00:21:08.298 sys 0m1.289s
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:08.299 ************************************
00:21:08.299 END TEST nvmf_shutdown_tc3
00:21:08.299 ************************************
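The killprocess sequence above degrades gracefully when the target process has already exited: kill -0 delivers no signal at all, it only tests whether the pid exists and is signalable. A minimal sketch of the same pattern, using this run's pid:

```bash
# "kill -0" is a cheap "is this process still alive?" probe: it sends no
# signal, it only checks that the pid exists and can be signaled.
pid=1521673
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
else
    echo "Process with pid $pid is not found"
fi
```

The neighboring iptr step uses the same defensive style: iptables-save | grep -v SPDK_NVMF | iptables-restore drops only the rules the harness tagged with an SPDK_NVMF comment, leaving every other firewall rule on the build host untouched.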
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:08.299 ************************************
00:21:08.299 START TEST nvmf_shutdown_tc4
00:21:08.299 ************************************
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... nvmf/common.sh@315-322 then declares the empty working arrays pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx (each paired with its local -a / -A / -ga declaration) ...]
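A reading note for this trace: expressions like [[ tcp == \r\d\m\a ]] above are not corruption. Inside [[ ]], the right-hand side of == is a glob pattern; when it must match literally, bash's set -x output re-quotes it by backslash-escaping every character. A minimal reproduction, assuming nothing beyond stock bash:

```bash
set -x
transport=tcp
# With a quoted (literal) pattern, the trace line bash prints comes out
# backslash-escaped, e.g.: [[ tcp == \r\d\m\a ]]
[[ $transport == "rdma" ]] || echo "transport is not rdma"
```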
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
[... nvmf/common.sh@330-344 appends the Mellanox device IDs 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015 and 0x1013 to mlx in the same way ...]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:21:08.299 Found 0000:86:00.0 (0x8086 - 0x159b)
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:21:08.299 Found 0000:86:00.1 (0x8086 - 0x159b)
[... the same ice driver and device-id checks (nvmf/common.sh@368-378) repeat for 0000:86:00.1 ...]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
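The 0x8086 / 0x159b pairs being matched above are PCI vendor and device IDs (Intel's E810 100GbE family, bound to the ice driver), and the script finds each port's netdev through sysfs rather than any driver tool. The same lookup can be reproduced by hand; the commands below are a hedged recreation for this machine's bus addresses, not part of the test itself:

```bash
# List the E810 functions by vendor:device ID, then find the netdev bound to each.
lspci -d 8086:159b
ls /sys/bus/pci/devices/0000:86:00.0/net/   # -> cvl_0_0 on this machine
ls /sys/bus/pci/devices/0000:86:00.1/net/   # -> cvl_0_1
```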
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:08.299 Found net devices under 0000:86:00.0: cvl_0_0 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:08.299 Found net devices under 0000:86:00.1: cvl_0_1 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:08.299 14:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:08.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:21:08.299 00:21:08.299 --- 10.0.0.2 ping statistics --- 00:21:08.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.299 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:08.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:21:08.299 00:21:08.299 --- 10.0.0.1 ping statistics --- 00:21:08.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.299 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1522993 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1522993 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1522993 ']' 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
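Condensing what the records above trace out: nvmf_tcp_init moves the target-side port into its own network namespace so initiator-to-target traffic actually crosses the link, opens TCP port 4420, verifies reachability in both directions, and nvmfappstart then launches nvmf_tgt inside that namespace. A hedged sketch of the same sequence (interface, namespace, and address names exactly as in this run; error handling omitted, and the real iptables rule also tags itself with -m comment):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    # waitforlisten then polls /var/tmp/spdk.sock until the app answers RPCs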
00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.299 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:08.299 [2024-11-17 14:30:57.484388] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:21:08.299 [2024-11-17 14:30:57.484433] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.558 [2024-11-17 14:30:57.566334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.558 [2024-11-17 14:30:57.610693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.558 [2024-11-17 14:30:57.610729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.558 [2024-11-17 14:30:57.610741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.558 [2024-11-17 14:30:57.610747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.558 [2024-11-17 14:30:57.610752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.558 [2024-11-17 14:30:57.612396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.558 [2024-11-17 14:30:57.612502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.558 [2024-11-17 14:30:57.612607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.558 [2024-11-17 14:30:57.612608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:09.126 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.126 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:09.126 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.126 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.126 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:09.385 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.385 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.385 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.385 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:09.385 [2024-11-17 14:30:58.355545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.385 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.385 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:09.385 14:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:21:09.385 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:09.385 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:09.385 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:09.385 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:09.385 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
[... the shutdown.sh@28 / shutdown.sh@29 "for i ... / cat" xtrace pair repeats identically for each of the ten subsystems ...]
00:21:09.386 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:21:09.386 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.386 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
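shutdown.sh@28/@29 above append one configuration block per subsystem to rpcs.txt, and shutdown.sh@36 replays the whole file through rpc_cmd; the Malloc1 .. Malloc10 lines that follow are the bdev names echoed back as each block executes. Per subsystem the block amounts to roughly this (a sketch assuming SPDK's stock rpc.py verbs; the script's exact bdev sizes, serial numbers, and subsystem options may differ):

    i=1
    ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512            # backing ramdisk
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420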
00:21:09.386 Malloc1
00:21:09.386 [2024-11-17 14:30:58.468631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:09.386 Malloc2
00:21:09.386 Malloc3
00:21:09.386 Malloc4
00:21:09.645 Malloc5
00:21:09.645 Malloc6
00:21:09.645 Malloc7
00:21:09.645 Malloc8
00:21:09.645 Malloc9
00:21:09.645 Malloc10
00:21:09.645 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.645 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:21:09.645 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:09.645 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:09.904 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1523276
00:21:09.904 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:21:09.904 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:21:09.904 [2024-11-17 14:30:58.973247] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1522993
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1522993 ']'
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1522993
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1522993
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1522993'
killing process with pid 1522993
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1522993
00:21:15.183 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1522993
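That sequence is the heart of shutdown test case 4: spdk_nvme_perf is started with 20 seconds of queued random writes against the target, and roughly five seconds in the harness kills nvmf_tgt while I/O is still in flight. Reduced to a sketch (paths, flags, and PIDs from this run; killprocess is the harness helper, approximated here by a plain kill/wait):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5
    kill 1522993        # SIGTERM the nvmf_tgt reactor mid-I/O
    wait 1522993        # everything below is perf reacting to the dead connection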
00:21:15.183 [2024-11-17 14:31:03.962090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9c4f0 is same with the state(6) to be set
[... the same tcp.c:1773 recv-state *ERROR* for tqpair=0x1b9c4f0 repeats 7 more times (14:31:03.962140 through .962180) ...]
00:21:15.184 Write completed with error (sct=0, sc=8)
00:21:15.184 starting I/O failed: -6
[... dozens of further interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:21:15.184 [2024-11-17 14:31:03.974119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... tcp.c:1773 recv-state *ERROR* for tqpair=0x1b9ac80 logged 4 times (14:31:03.974240-.974298), interleaved with more failed writes ...]
00:21:15.184 [2024-11-17 14:31:03.974975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... tcp.c:1773 recv-state *ERROR* for tqpair=0x1b9b640 (twice, 14:31:03.974977/.975002) and tqpair=0x1b9a7b0 (once, 14:31:03.975118), interleaved with more failed writes ...]
00:21:15.185 [2024-11-17 14:31:03.975979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... a long run of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines while the remaining cnode1 queue drains ...]
00:21:15.185 [2024-11-17 14:31:03.977785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:15.185 NVMe io qpair process completion error
[... the failed-write pattern begins again for nqn.2016-06.io.spdk:cnode10 ...]
00:21:15.185 [2024-11-17 14:31:03.978755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b98a70 is same with the state(6) to be set
00:21:15.185 [2024-11-17 14:31:03.978759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... the tqpair=0x1b98a70 recv-state *ERROR* repeats 9 more times (14:31:03.978775-.978828), interleaved with more failed writes ...]
00:21:15.186 [2024-11-17 14:31:03.979530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... tcp.c:1773 recv-state *ERROR* for tqpair=0x1b9a2c0 logged 7 times (14:31:03.980299-.980359), interleaved with more failed writes ...]
00:21:15.186 [2024-11-17 14:31:03.980585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... a long run of failed-write lines while the remaining cnode10 queue drains ...]
00:21:15.187 [2024-11-17 14:31:03.982188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:15.187 NVMe io qpair process completion error
[... the failed-write pattern begins again for nqn.2016-06.io.spdk:cnode2 ...]
00:21:15.187 [2024-11-17 14:31:03.983094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... more failed writes ...]
00:21:15.187 [2024-11-17 14:31:03.984029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... more failed writes ...]
00:21:15.188 [2024-11-17 14:31:03.985242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... the failed-write / recv-state pattern continues for the remaining qpairs ...]
00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 [2024-11-17 14:31:03.987184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:15.188 NVMe io qpair process completion error 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 starting I/O failed: -6 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.188 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: 
-6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 [2024-11-17 14:31:03.988202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 
00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 [2024-11-17 14:31:03.989095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 
00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 [2024-11-17 14:31:03.990109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.189 starting I/O failed: -6 00:21:15.189 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O 
failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O 
failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 [2024-11-17 14:31:03.992162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:15.190 NVMe io qpair process completion error 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 [2024-11-17 14:31:03.993175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed 
with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 Write completed with error (sct=0, sc=8) 00:21:15.190 starting I/O failed: -6 00:21:15.190 [2024-11-17 14:31:03.994019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:15.190 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error 
(sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 [2024-11-17 14:31:03.995049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed 
with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with 
error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 [2024-11-17 14:31:03.998986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:15.191 NVMe io qpair process completion error 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 Write completed with error (sct=0, sc=8) 00:21:15.191 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: 
-6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 [2024-11-17 14:31:04.000122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 
Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 [2024-11-17 14:31:04.001053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write 
completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 [2024-11-17 14:31:04.002088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.192 Write completed with error (sct=0, sc=8) 00:21:15.192 starting I/O failed: -6 00:21:15.193 Write completed with error (sct=0, sc=8) 00:21:15.193 starting I/O failed: -6 00:21:15.193 Write completed with error (sct=0, sc=8) 00:21:15.193 starting I/O failed: -6 00:21:15.193 Write completed with error (sct=0, sc=8) 00:21:15.193 starting I/O failed: -6 00:21:15.193 Write completed with error (sct=0, sc=8) 00:21:15.193 starting I/O failed: -6 00:21:15.193 Write completed with error (sct=0, sc=8) 00:21:15.193 starting I/O failed: -6 00:21:15.193 Write completed with error (sct=0, sc=8) 00:21:15.193 starting I/O failed: -6 00:21:15.193 Write completed with error (sct=0, sc=8) 00:21:15.193 starting I/O failed: -6 00:21:15.193 Write completed with error (sct=0, sc=8) 00:21:15.193 starting I/O failed: -6 
00:21:15.193 Write completed with error (sct=0, sc=8)
00:21:15.193 starting I/O failed: -6
00:21:15.193 (identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs repeat for every remaining queued write on each failing qpair; duplicate repeats omitted below)
00:21:15.193 [2024-11-17 14:31:04.005621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:15.193 NVMe io qpair process completion error
00:21:15.193 [2024-11-17 14:31:04.006603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:15.194 [2024-11-17 14:31:04.007526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:15.194 [2024-11-17 14:31:04.008555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:15.195 [2024-11-17 14:31:04.010295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:15.195 NVMe io qpair process completion error
00:21:15.195 [2024-11-17 14:31:04.011195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:15.195 [2024-11-17 14:31:04.012113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:15.195 [2024-11-17 14:31:04.013148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:15.196 [2024-11-17 14:31:04.014698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:15.196 NVMe io qpair process completion error
00:21:15.196 [2024-11-17 14:31:04.015671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:15.197 [2024-11-17 14:31:04.016538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:15.197 [2024-11-17 14:31:04.017563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:15.197 [2024-11-17 14:31:04.021769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:15.197 NVMe io qpair process completion error
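Every qpair above fails with CQ transport error -6; in SPDK this is a negative errno surfaced by spdk_nvme_qpair_process_completions() once the target side of the TCP connection has gone away during shutdown. As a hedged triage aside (the command below is illustrative and not part of the recorded test run; it assumes a host with python3 on PATH), errno 6 can be decoded to confirm it matches the "No such device or address" text in the log:

    # Decode errno 6: prints "ENXIO - No such device or address",
    # the same value reported as "-6" in the CQ transport errors above.
    python3 -c 'import errno, os; print(errno.errorcode[6], "-", os.strerror(6))'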
00:21:15.199 Initializing NVMe Controllers
00:21:15.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:15.199 Controller IO queue size 128, less than required.
00:21:15.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:15.199 Controller IO queue size 128, less than required.
00:21:15.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:15.199 Controller IO queue size 128, less than required.
00:21:15.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:15.199 Controller IO queue size 128, less than required.
00:21:15.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:15.199 Controller IO queue size 128, less than required.
00:21:15.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:15.199 Controller IO queue size 128, less than required.
00:21:15.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:15.199 Controller IO queue size 128, less than required.
00:21:15.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:15.199 Controller IO queue size 128, less than required.
00:21:15.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:15.199 Controller IO queue size 128, less than required.
00:21:15.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:15.199 Controller IO queue size 128, less than required.
00:21:15.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:15.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:15.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:15.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:15.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:15.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:15.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:15.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:15.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:15.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:15.199 Initialization complete. Launching workers.
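
The "Controller IO queue size 128, less than required" warnings above mean the perf tool requested a deeper queue than the controller's reported IO queue size, so the surplus requests sit queued in the host driver. A minimal sketch of a rerun with a smaller depth, assuming a standalone spdk_nvme_perf invocation (the -q/-o/-w/-t/-r values here are illustrative, not this test's actual arguments):

  # Keep queue depth at or below the controller's IO queue size (128 here)
  # so requests are submitted directly instead of queuing in the driver.
  ./build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w write -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
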
00:21:15.199 ========================================================
00:21:15.199 Latency(us)
00:21:15.199 Device Information : IOPS MiB/s Average min max
00:21:15.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2128.86 91.47 60129.63 865.83 99322.07
00:21:15.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2151.38 92.44 59511.69 860.80 113931.49
00:21:15.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2155.45 92.62 59434.25 817.76 111816.27
00:21:15.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2149.66 92.37 58934.19 885.49 108431.40
00:21:15.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2169.39 93.22 58409.13 780.18 108694.95
00:21:15.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2151.38 92.44 58910.73 675.69 106722.82
00:21:15.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2169.82 93.23 58423.76 838.14 106933.80
00:21:15.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2153.52 92.53 58882.30 751.24 106376.87
00:21:15.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2114.71 90.87 60005.89 855.54 108357.03
00:21:15.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2138.30 91.88 59377.77 937.77 104357.06
00:21:15.199 ========================================================
00:21:15.200 Total : 21482.47 923.07 59198.18 675.69 113931.49
00:21:15.200
00:21:15.200 [2024-11-17 14:31:04.030205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff1ef0 is same with the state(6) to be set
00:21:15.200 [2024-11-17 14:31:04.030257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff2410 is same with the state(6) to be set
00:21:15.200 [2024-11-17 14:31:04.030289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff1560 is same with the state(6) to be set
00:21:15.200 [2024-11-17 14:31:04.030320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff3720 is same with the state(6) to be set
00:21:15.200 [2024-11-17 14:31:04.030348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff3900 is same with the state(6) to be set
00:21:15.200 [2024-11-17 14:31:04.030384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff3ae0 is same with the state(6) to be set
00:21:15.200 [2024-11-17 14:31:04.030413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff1890 is same with the state(6) to be set
00:21:15.200 [2024-11-17 14:31:04.030442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff2a70 is same with the state(6) to be set
00:21:15.200 [2024-11-17 14:31:04.030469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff2740 is same with the state(6) to be set
00:21:15.200 [2024-11-17 14:31:04.030497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff1bc0 is same with the state(6) to be set
00:21:15.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:15.200 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:16.138 14:31:05
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1523276 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1523276 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1523276 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:16.138 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:16.398 rmmod nvme_tcp 00:21:16.398 rmmod nvme_fabrics 00:21:16.398 rmmod nvme_keyring 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
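
The NOT/wait trace above exercises autotest_common.sh's negative-assertion helper: the perf process exited nonzero, and the test step passes only because wait fails. A minimal sketch of the helper's shape, reconstructed from the es=/(( es > 128 ))/(( !es == 0 )) steps in the trace (the real function also handles whitelisted error patterns):

  # NOT succeeds only when the wrapped command fails; a status above 128
  # (death by signal) is propagated as a hard error instead.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"
      (( es != 0 ))
  }

  # usage, as in the trace: assert the perf process did not exit cleanly
  NOT wait 1523276
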
nvmf/common.sh@129 -- # return 0 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1522993 ']' 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1522993 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1522993 ']' 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1522993 00:21:16.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1522993) - No such process 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1522993 is not found' 00:21:16.398 Process with pid 1522993 is not found 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.398 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.304 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:18.304 00:21:18.304 real 0m10.399s 00:21:18.304 user 0m27.567s 00:21:18.304 sys 0m5.127s 00:21:18.304 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.305 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.305 ************************************ 00:21:18.305 END TEST nvmf_shutdown_tc4 00:21:18.305 ************************************ 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:18.564 00:21:18.564 real 0m41.315s 00:21:18.564 user 1m42.434s 00:21:18.564 sys 0m13.885s 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
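
killprocess, traced above for pid 1522993, probes the process with signal 0 before killing it; here the target already exited during the shutdown test, so the helper only reports that. A sketch of that shape, assuming the simple path (the real helper also matches the process name first, as the ps --no-headers -o comm= step later in this log shows):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      if kill -0 "$pid" 2> /dev/null; then  # signal 0: existence check only
          kill "$pid" && wait "$pid"
      else
          echo "Process with pid $pid is not found"
      fi
  }
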
common/autotest_common.sh@10 -- # set +x 00:21:18.564 ************************************ 00:21:18.564 END TEST nvmf_shutdown 00:21:18.564 ************************************ 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:18.564 ************************************ 00:21:18.564 START TEST nvmf_nsid 00:21:18.564 ************************************ 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:18.564 * Looking for test storage... 00:21:18.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:18.564 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:18.824 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:18.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.824 --rc genhtml_branch_coverage=1 00:21:18.824 --rc genhtml_function_coverage=1 00:21:18.825 --rc genhtml_legend=1 00:21:18.825 --rc geninfo_all_blocks=1 00:21:18.825 --rc geninfo_unexecuted_blocks=1 00:21:18.825 00:21:18.825 ' 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:18.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.825 --rc genhtml_branch_coverage=1 00:21:18.825 --rc genhtml_function_coverage=1 00:21:18.825 --rc genhtml_legend=1 00:21:18.825 --rc geninfo_all_blocks=1 00:21:18.825 --rc geninfo_unexecuted_blocks=1 00:21:18.825 00:21:18.825 ' 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:18.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.825 --rc genhtml_branch_coverage=1 00:21:18.825 --rc genhtml_function_coverage=1 00:21:18.825 --rc genhtml_legend=1 00:21:18.825 --rc geninfo_all_blocks=1 00:21:18.825 --rc geninfo_unexecuted_blocks=1 00:21:18.825 00:21:18.825 ' 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:18.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.825 --rc genhtml_branch_coverage=1 00:21:18.825 --rc genhtml_function_coverage=1 00:21:18.825 --rc genhtml_legend=1 00:21:18.825 --rc geninfo_all_blocks=1 00:21:18.825 --rc geninfo_unexecuted_blocks=1 00:21:18.825 00:21:18.825 ' 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
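
The lt/cmp_versions trace above checks lcov's version against 2 by splitting each version string on dots, dashes and colons and comparing components numerically. A condensed sketch of that algorithm, assuming the shape visible in the scripts/common.sh trace (the real helper also tracks gt/eq counters):

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local IFS=.-:  # split on dots, dashes and colons, as in the trace
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
      for ((v = 0; v < len; v++)); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          ((a > b)) && { [[ $2 == '>' ]]; return; }
          ((a < b)) && { [[ $2 == '<' ]]; return; }
      done
      [[ $2 == '==' ]]  # every component matched
  }

So 'lt 1.15 2' succeeds at the first component (1 < 2), which is why the lcov coverage options are enabled above.
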
== FreeBSD ]] 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:18.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:18.825 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:25.392 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:25.392 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.392 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:25.393 Found net devices under 0000:86:00.0: cvl_0_0 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:25.393 Found net devices under 0000:86:00.1: cvl_0_1 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.393 14:31:13 
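
The device scan above walks a table of NVMe-oF-capable NICs (Intel E810/X722 parts plus a list of Mellanox parts) keyed by PCI vendor:device ID, then maps each matching PCI address to its kernel net devices through sysfs. A minimal sketch of the mapping step, assuming pci_bus_cache is an associative array prefilled by an earlier bus scan (as the nvmf/common.sh trace implies):

  intel=0x8086
  declare -A pci_bus_cache  # assumed: "vendor:device" -> PCI addresses, filled elsewhere
  e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})

  net_devs=()
  for pci in "${e810[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")  # strip sysfs path, keep interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done

On this machine both 0000:86:00.0 and 0000:86:00.1 match the E810 0x159b ID, yielding cvl_0_0 and cvl_0_1.
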
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:25.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:21:25.393 00:21:25.393 --- 10.0.0.2 ping statistics --- 00:21:25.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.393 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:25.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:25.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:21:25.393 00:21:25.393 --- 10.0.0.1 ping statistics --- 00:21:25.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.393 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1527867 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1527867 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1527867 ']' 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.393 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.393 [2024-11-17 14:31:13.801438] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
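
The nvmf_tcp_init steps traced above build the two-endpoint TCP topology used by these tests: one port of the E810 pair moves into a network namespace to act as the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1). Collected from the trace into one place:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side of the cable
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The SPDK_NVMF comment tag is what lets the later iptr cleanup drop exactly these rules via iptables-save | grep -v SPDK_NVMF | iptables-restore.
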
00:21:25.393 [2024-11-17 14:31:13.801483] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.393 [2024-11-17 14:31:13.882241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.393 [2024-11-17 14:31:13.923273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.393 [2024-11-17 14:31:13.923310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.393 [2024-11-17 14:31:13.923318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.393 [2024-11-17 14:31:13.923323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.393 [2024-11-17 14:31:13.923328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.393 [2024-11-17 14:31:13.923929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.393 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1527978 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=adb10765-2f3b-4fec-938e-eb94d8ff01ae 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=49c33dd0-0efa-4abd-8904-fdaef244ed8a 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=63395ead-8267-4520-90aa-22e023f8f8ae 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.394 null0 00:21:25.394 null1 00:21:25.394 null2 00:21:25.394 [2024-11-17 14:31:14.106816] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:21:25.394 [2024-11-17 14:31:14.106861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527978 ] 00:21:25.394 [2024-11-17 14:31:14.110434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.394 [2024-11-17 14:31:14.134618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1527978 /var/tmp/tgt2.sock 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1527978 ']' 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:25.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
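
The rpc_cmd block above runs with xtrace disabled, so only its results are visible: null bdevs null0/null1/null2 and, shortly after, a listener on 10.0.0.1:4421. A plausible sketch of what it configures for the first subsystem, assuming standard rpc.py calls, illustrative bdev sizes, and the ns1uuid generated above (the other two subsystems would follow the same pattern):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/tgt2.sock nvmf_create_transport -t tcp
  $rpc -s /var/tmp/tgt2.sock bdev_null_create null0 100 4096   # 100 MiB, 4K blocks (illustrative)
  $rpc -s /var/tmp/tgt2.sock nvmf_create_subsystem nqn.2024-10.io.spdk:cnode0 -a
  $rpc -s /var/tmp/tgt2.sock nvmf_subsystem_add_ns -u adb10765-2f3b-4fec-938e-eb94d8ff01ae \
      nqn.2024-10.io.spdk:cnode0 null0
  $rpc -s /var/tmp/tgt2.sock nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode0 \
      -t tcp -a 10.0.0.1 -s 4421

Pinning each namespace UUID at creation time is the point of the test: the NGUID read back over the fabric below must be that UUID with the hyphens removed.
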
00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.394 [2024-11-17 14:31:14.182361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.394 [2024-11-17 14:31:14.229012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:25.394 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:25.653 [2024-11-17 14:31:14.737796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.653 [2024-11-17 14:31:14.753905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:25.653 nvme0n1 nvme0n2 00:21:25.653 nvme1n1 00:21:25.653 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:25.653 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:25.653 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:27.030 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:27.965 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid adb10765-2f3b-4fec-938e-eb94d8ff01ae 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=adb107652f3b4fec938eeb94d8ff01ae 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo ADB107652F3B4FEC938EEB94D8FF01AE 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ ADB107652F3B4FEC938EEB94D8FF01AE == \A\D\B\1\0\7\6\5\2\F\3\B\4\F\E\C\9\3\8\E\E\B\9\4\D\8\F\F\0\1\A\E ]] 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:27.965 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:27.966 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:27.966 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:27.966 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 49c33dd0-0efa-4abd-8904-fdaef244ed8a 00:21:27.966 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:27.966 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:27.966 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:27.966 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:27.966 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=49c33dd00efa4abd8904fdaef244ed8a 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 49C33DD00EFA4ABD8904FDAEF244ED8A 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 49C33DD00EFA4ABD8904FDAEF244ED8A == \4\9\C\3\3\D\D\0\0\E\F\A\4\A\B\D\8\9\0\4\F\D\A\E\F\2\4\4\E\D\8\A ]] 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:27.966 14:31:17 
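
waitforblk, traced above, simply polls lsblk until the newly connected namespace shows up as a block device; nvme0n1 needed one retry here. A sketch matching the trace's i counter and 15-try bound:

  waitforblk() {
      local i=0
      while ! lsblk -l -o NAME | grep -q -w "$1"; do
          (( ++i < 15 )) || return 1  # give up after ~15 seconds
          sleep 1
      done
  }
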
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 63395ead-8267-4520-90aa-22e023f8f8ae 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=63395ead8267452090aa22e023f8f8ae 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 63395EAD8267452090AA22E023F8F8AE 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 63395EAD8267452090AA22E023F8F8AE == \6\3\3\9\5\E\A\D\8\2\6\7\4\5\2\0\9\0\A\A\2\2\E\0\2\3\F\8\F\8\A\E ]] 00:21:27.966 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1527978 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1527978 ']' 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1527978 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1527978 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1527978' 00:21:28.224 killing process with pid 1527978 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1527978 00:21:28.224 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1527978 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
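
The NGUID checks above close the loop: for each namespace, the UUID chosen at create time is normalized and compared against what the target reports over NVMe/TCP. A sketch of the two helpers as the trace shows them (the test additionally uppercases the reported nguid before comparing):

  uuid2nguid() {
      tr -d - <<< "${1^^}"  # uppercase the UUID, strip the hyphens
  }

  nvme_get_nguid() {
      local ctrlr=$1 nsid=$2
      nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid
  }

  uuid2nguid adb10765-2f3b-4fec-938e-eb94d8ff01ae
  # -> ADB107652F3B4FEC938EEB94D8FF01AE, which must equal the nguid that
  #    'nvme id-ns' returns for nvme0n1
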
# set +e 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:28.483 rmmod nvme_tcp 00:21:28.483 rmmod nvme_fabrics 00:21:28.483 rmmod nvme_keyring 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1527867 ']' 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1527867 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1527867 ']' 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1527867 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.483 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1527867 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1527867' 00:21:28.743 killing process with pid 1527867 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1527867 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1527867 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.289 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:31.289 00:21:31.289 real 0m12.340s 00:21:31.289 user 0m9.611s 
00:21:31.289 sys 0m5.487s 00:21:31.289 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.289 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:31.289 ************************************ 00:21:31.289 END TEST nvmf_nsid 00:21:31.289 ************************************ 00:21:31.289 14:31:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:31.289 00:21:31.289 real 12m0.911s 00:21:31.289 user 25m44.050s 00:21:31.289 sys 3m43.410s 00:21:31.289 14:31:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.289 14:31:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:31.289 ************************************ 00:21:31.289 END TEST nvmf_target_extra 00:21:31.289 ************************************ 00:21:31.289 14:31:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:31.289 14:31:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.289 14:31:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.289 14:31:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.289 ************************************ 00:21:31.289 START TEST nvmf_host 00:21:31.289 ************************************ 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:31.289 * Looking for test storage... 00:21:31.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:31.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.289 --rc genhtml_branch_coverage=1 00:21:31.289 --rc genhtml_function_coverage=1 00:21:31.289 --rc genhtml_legend=1 00:21:31.289 --rc geninfo_all_blocks=1 00:21:31.289 --rc geninfo_unexecuted_blocks=1 00:21:31.289 00:21:31.289 ' 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:31.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.289 --rc genhtml_branch_coverage=1 00:21:31.289 --rc genhtml_function_coverage=1 00:21:31.289 --rc genhtml_legend=1 00:21:31.289 --rc geninfo_all_blocks=1 00:21:31.289 --rc geninfo_unexecuted_blocks=1 00:21:31.289 00:21:31.289 ' 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:31.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.289 --rc genhtml_branch_coverage=1 00:21:31.289 --rc genhtml_function_coverage=1 00:21:31.289 --rc genhtml_legend=1 00:21:31.289 --rc geninfo_all_blocks=1 00:21:31.289 --rc geninfo_unexecuted_blocks=1 00:21:31.289 00:21:31.289 ' 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:31.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.289 --rc genhtml_branch_coverage=1 00:21:31.289 --rc genhtml_function_coverage=1 00:21:31.289 --rc genhtml_legend=1 00:21:31.289 --rc geninfo_all_blocks=1 00:21:31.289 --rc geninfo_unexecuted_blocks=1 00:21:31.289 00:21:31.289 ' 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
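The cmp_versions walk above (scripts/common.sh, traced while checking the installed lcov against 1.15) is the usual pure-bash component-wise version compare: split both versions on '.' and '-' into arrays, treat missing components as zero, and compare numerically index by index. A minimal standalone sketch of the same idea; the function name version_lt is illustrative, not the SPDK helper itself:

  # split on . and -, compare components numerically, missing parts count as 0
  version_lt() {
    local -a v1 v2
    local i len
    IFS=.- read -ra v1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.- read -ra v2 <<< "$2"   # "2"    -> (2)
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov is older than 2"   # same outcome as the trace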
00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.289 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.290 ************************************ 00:21:31.290 START TEST nvmf_multicontroller 00:21:31.290 ************************************ 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:31.290 * Looking for test storage... 
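The "[: : integer expression expected" diagnostic above is a real (and here harmless) bash error: nvmf/common.sh line 33 ends up running '[' '' -eq 1 ']' because the variable it tests is empty, and test's -eq needs an integer on both sides, so the command fails with status 2 and the script simply falls through to its else branch. A small sketch of the failure and two common guards; the variable name flag is illustrative only:

  flag=""                         # optional knob left unset
  [ "$flag" -eq 1 ]               # prints "[: : integer expression expected"
  [ "${flag:-0}" -eq 1 ]          # guard 1: default empty to 0, quietly false
  [[ -n $flag && $flag -eq 1 ]]   # guard 2: only compare when non-empty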
00:21:31.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.290 --rc genhtml_branch_coverage=1 00:21:31.290 --rc genhtml_function_coverage=1 00:21:31.290 --rc genhtml_legend=1 00:21:31.290 --rc geninfo_all_blocks=1 00:21:31.290 --rc geninfo_unexecuted_blocks=1 00:21:31.290 00:21:31.290 ' 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.290 --rc genhtml_branch_coverage=1 00:21:31.290 --rc genhtml_function_coverage=1 00:21:31.290 --rc genhtml_legend=1 00:21:31.290 --rc geninfo_all_blocks=1 00:21:31.290 --rc geninfo_unexecuted_blocks=1 00:21:31.290 00:21:31.290 ' 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.290 --rc genhtml_branch_coverage=1 00:21:31.290 --rc genhtml_function_coverage=1 00:21:31.290 --rc genhtml_legend=1 00:21:31.290 --rc geninfo_all_blocks=1 00:21:31.290 --rc geninfo_unexecuted_blocks=1 00:21:31.290 00:21:31.290 ' 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.290 --rc genhtml_branch_coverage=1 00:21:31.290 --rc genhtml_function_coverage=1 00:21:31.290 --rc genhtml_legend=1 00:21:31.290 --rc geninfo_all_blocks=1 00:21:31.290 --rc geninfo_unexecuted_blocks=1 00:21:31.290 00:21:31.290 ' 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:31.290 14:31:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.290 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.291 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:31.551 14:31:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.551 14:31:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.968 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.968 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:36.968 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:36.968 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:36.968 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:36.968 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:36.968 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:36.969 
14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:36.969 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:36.969 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.969 14:31:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:36.969 Found net devices under 0000:86:00.0: cvl_0_0 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:36.969 Found net devices under 0000:86:00.1: cvl_0_1 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
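The device discovery traced above (gather_supported_nvmf_pci_devs) boils down to a sysfs lookup: a PCI function's network interfaces appear as directory names under /sys/bus/pci/devices/<BDF>/net/. A condensed sketch of that lookup for one of the e810 ports found in this run, with the BDF and resulting interface name copied from the log:

  pci=0000:86:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifnames
  echo "Found net devices under $pci: ${pci_net_devs[*]}"

With nullglob unset, a function with no interfaces would leave the literal glob pattern in the array, which is apparently why the traced script counts the matches (the (( 1 == 0 )) checks above) before using them.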
00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:36.969 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:37.229 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.229 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.229 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.229 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.229 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:37.229 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.229 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.229 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.229 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:37.229 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:37.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:21:37.229 00:21:37.229 --- 10.0.0.2 ping statistics --- 00:21:37.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.229 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:21:37.229 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:37.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:21:37.229 00:21:37.229 --- 10.0.0.1 ping statistics --- 00:21:37.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.229 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:21:37.230 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.230 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:37.230 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:37.230 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.230 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:37.230 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:37.230 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.230 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:37.230 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1532090 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1532090 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1532090 ']' 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.489 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.489 [2024-11-17 14:31:26.519599] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
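The nvmf_tcp_init steps traced above give the test a two-endpoint TCP topology on one host without veth pairs: one physical e810 port moves into a private network namespace and becomes the target side, while its sibling port stays in the root namespace as the initiator. Condensed from the commands in this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                    # initiator -> target

nvmf_tgt is then started under ip netns exec so it listens on the target-side address, which is why every later command against the target in this log is prefixed with the namespace.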
00:21:37.489 [2024-11-17 14:31:26.519651] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.489 [2024-11-17 14:31:26.600038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:37.489 [2024-11-17 14:31:26.641924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.489 [2024-11-17 14:31:26.641961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.489 [2024-11-17 14:31:26.641969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.489 [2024-11-17 14:31:26.641975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.489 [2024-11-17 14:31:26.641981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:37.489 [2024-11-17 14:31:26.643400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.489 [2024-11-17 14:31:26.643507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.489 [2024-11-17 14:31:26.643508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.749 [2024-11-17 14:31:26.787912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.749 Malloc0 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.749 [2024-11-17 14:31:26.847304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.749 [2024-11-17 14:31:26.855225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.749 Malloc1 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1532312 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1532312 /var/tmp/bdevperf.sock 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1532312 ']' 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
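bdevperf is launched above with -z (do not start I/O until told over RPC) and its own RPC socket (-r /var/tmp/bdevperf.sock), so the test can attach controllers to it before any workload runs. rpc_cmd in this trace forwards to SPDK's scripts/rpc.py; done by hand, the first attach below would look roughly like this (arguments copied from the log):

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

Note the two -s flags: the first, before the method name, selects the RPC socket; the second is the method's own trsvcid (port) argument.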
00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.749 14:31:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.009 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:38.009 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:38.009 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.009 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.270 NVMe0n1 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.270 1 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.270 request: 00:21:38.270 { 00:21:38.270 "name": "NVMe0", 00:21:38.270 "trtype": "tcp", 00:21:38.270 "traddr": "10.0.0.2", 00:21:38.270 "adrfam": "ipv4", 00:21:38.270 "trsvcid": "4420", 00:21:38.270 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:38.270 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:38.270 "hostaddr": "10.0.0.1", 00:21:38.270 "prchk_reftag": false, 00:21:38.270 "prchk_guard": false, 00:21:38.270 "hdgst": false, 00:21:38.270 "ddgst": false, 00:21:38.270 "allow_unrecognized_csi": false, 00:21:38.270 "method": "bdev_nvme_attach_controller", 00:21:38.270 "req_id": 1 00:21:38.270 } 00:21:38.270 Got JSON-RPC error response 00:21:38.270 response: 00:21:38.270 { 00:21:38.270 "code": -114, 00:21:38.270 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:38.270 } 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.270 request: 00:21:38.270 { 00:21:38.270 "name": "NVMe0", 00:21:38.270 "trtype": "tcp", 00:21:38.270 "traddr": "10.0.0.2", 00:21:38.270 "adrfam": "ipv4", 00:21:38.270 "trsvcid": "4420", 00:21:38.270 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:38.270 "hostaddr": "10.0.0.1", 00:21:38.270 "prchk_reftag": false, 00:21:38.270 "prchk_guard": false, 00:21:38.270 "hdgst": false, 00:21:38.270 "ddgst": false, 00:21:38.270 "allow_unrecognized_csi": false, 00:21:38.270 "method": "bdev_nvme_attach_controller", 00:21:38.270 "req_id": 1 00:21:38.270 } 00:21:38.270 Got JSON-RPC error response 00:21:38.270 response: 00:21:38.270 { 00:21:38.270 "code": -114, 00:21:38.270 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:38.270 } 00:21:38.270 14:31:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.270 request: 00:21:38.270 { 00:21:38.270 "name": "NVMe0", 00:21:38.270 "trtype": "tcp", 00:21:38.270 "traddr": "10.0.0.2", 00:21:38.270 "adrfam": "ipv4", 00:21:38.270 "trsvcid": "4420", 00:21:38.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.270 "hostaddr": "10.0.0.1", 00:21:38.270 "prchk_reftag": false, 00:21:38.270 "prchk_guard": false, 00:21:38.270 "hdgst": false, 00:21:38.270 "ddgst": false, 00:21:38.270 "multipath": "disable", 00:21:38.270 "allow_unrecognized_csi": false, 00:21:38.270 "method": "bdev_nvme_attach_controller", 00:21:38.270 "req_id": 1 00:21:38.270 } 00:21:38.270 Got JSON-RPC error response 00:21:38.270 response: 00:21:38.270 { 00:21:38.270 "code": -114, 00:21:38.270 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:38.270 } 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.270 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.271 14:31:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.271 request: 00:21:38.271 { 00:21:38.271 "name": "NVMe0", 00:21:38.271 "trtype": "tcp", 00:21:38.271 "traddr": "10.0.0.2", 00:21:38.271 "adrfam": "ipv4", 00:21:38.271 "trsvcid": "4420", 00:21:38.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.271 "hostaddr": "10.0.0.1", 00:21:38.271 "prchk_reftag": false, 00:21:38.271 "prchk_guard": false, 00:21:38.271 "hdgst": false, 00:21:38.271 "ddgst": false, 00:21:38.271 "multipath": "failover", 00:21:38.271 "allow_unrecognized_csi": false, 00:21:38.271 "method": "bdev_nvme_attach_controller", 00:21:38.271 "req_id": 1 00:21:38.271 } 00:21:38.271 Got JSON-RPC error response 00:21:38.271 response: 00:21:38.271 { 00:21:38.271 "code": -114, 00:21:38.271 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:38.271 } 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.271 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.530 NVMe0n1 00:21:38.530 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
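The three NOT-wrapped attach calls above are negative tests: bdev_nvme_attach_controller returns JSON-RPC error -114 because a controller named NVMe0 already exists and each request is incompatible with it (a different host NQN, a different subsystem NQN, or a multipath mode that rejects the duplicate path). The call at host/multicontroller.sh@79 then succeeds because it adds the 4421 listener as a legitimate second path to the same subsystem. Issued by hand it would look roughly like this (names and addresses taken from the log; $rpc stands for the SPDK scripts/rpc.py client):

# Sketch: attach a second network path to the existing NVMe0 controller.
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1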
00:21:38.530 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.530 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.530 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.530 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.530 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:38.530 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.530 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.789 00:21:38.789 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.789 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:38.789 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:38.789 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.789 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.789 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.789 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:38.789 14:31:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:39.726 { 00:21:39.726 "results": [ 00:21:39.726 { 00:21:39.726 "job": "NVMe0n1", 00:21:39.726 "core_mask": "0x1", 00:21:39.726 "workload": "write", 00:21:39.726 "status": "finished", 00:21:39.726 "queue_depth": 128, 00:21:39.726 "io_size": 4096, 00:21:39.726 "runtime": 1.004337, 00:21:39.726 "iops": 24284.67735431434, 00:21:39.726 "mibps": 94.86202091529039, 00:21:39.726 "io_failed": 0, 00:21:39.726 "io_timeout": 0, 00:21:39.726 "avg_latency_us": 5264.557103588428, 00:21:39.726 "min_latency_us": 3234.0591304347827, 00:21:39.726 "max_latency_us": 11568.528695652174 00:21:39.726 } 00:21:39.726 ], 00:21:39.726 "core_count": 1 00:21:39.726 } 00:21:39.726 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:39.726 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.726 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.726 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.726 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:39.726 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1532312 00:21:39.726 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1532312 ']' 00:21:39.726 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1532312 00:21:39.726 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:39.986 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.986 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1532312 00:21:39.986 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:39.986 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:39.986 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1532312' 00:21:39.986 killing process with pid 1532312 00:21:39.986 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1532312 00:21:39.986 14:31:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1532312 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:39.986 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:39.986 [2024-11-17 14:31:26.959110] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:21:39.986 [2024-11-17 14:31:26.959159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532312 ] 00:21:39.986 [2024-11-17 14:31:27.037663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.986 [2024-11-17 14:31:27.079566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.986 [2024-11-17 14:31:27.780708] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name d8d76438-9b79-4197-b2f8-efc9f07a97e6 already exists 00:21:39.986 [2024-11-17 14:31:27.780737] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:d8d76438-9b79-4197-b2f8-efc9f07a97e6 alias for bdev NVMe1n1 00:21:39.986 [2024-11-17 14:31:27.780745] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:39.986 Running I/O for 1 seconds... 00:21:39.986 24262.00 IOPS, 94.77 MiB/s 00:21:39.986 Latency(us) 00:21:39.986 [2024-11-17T13:31:29.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.986 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:39.986 NVMe0n1 : 1.00 24284.68 94.86 0.00 0.00 5264.56 3234.06 11568.53 00:21:39.986 [2024-11-17T13:31:29.211Z] =================================================================================================================== 00:21:39.986 [2024-11-17T13:31:29.211Z] Total : 24284.68 94.86 0.00 0.00 5264.56 3234.06 11568.53 00:21:39.986 Received shutdown signal, test time was about 1.000000 seconds 00:21:39.986 00:21:39.986 Latency(us) 00:21:39.986 [2024-11-17T13:31:29.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.986 [2024-11-17T13:31:29.211Z] =================================================================================================================== 00:21:39.986 [2024-11-17T13:31:29.211Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.986 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:39.986 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:39.986 rmmod nvme_tcp 00:21:40.245 rmmod nvme_fabrics 00:21:40.245 rmmod nvme_keyring 00:21:40.245 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.245 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:40.245 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:40.245 
14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1532090 ']' 00:21:40.246 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1532090 00:21:40.246 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1532090 ']' 00:21:40.246 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1532090 00:21:40.246 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:40.246 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.246 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1532090 00:21:40.246 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:40.246 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:40.246 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1532090' 00:21:40.246 killing process with pid 1532090 00:21:40.246 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1532090 00:21:40.246 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1532090 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.505 14:31:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.411 14:31:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:42.411 00:21:42.411 real 0m11.273s 00:21:42.411 user 0m12.674s 00:21:42.411 sys 0m5.194s 00:21:42.411 14:31:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:42.411 14:31:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.411 ************************************ 00:21:42.411 END TEST nvmf_multicontroller 00:21:42.411 ************************************ 00:21:42.411 14:31:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:21:42.411 14:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:42.411 14:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:42.411 14:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.671 ************************************ 00:21:42.671 START TEST nvmf_aer 00:21:42.671 ************************************ 00:21:42.671 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:42.671 * Looking for test storage... 00:21:42.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:42.671 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:42.671 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:42.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.672 --rc genhtml_branch_coverage=1 00:21:42.672 --rc genhtml_function_coverage=1 00:21:42.672 --rc genhtml_legend=1 00:21:42.672 --rc geninfo_all_blocks=1 00:21:42.672 --rc geninfo_unexecuted_blocks=1 00:21:42.672 00:21:42.672 ' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:42.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.672 --rc genhtml_branch_coverage=1 00:21:42.672 --rc genhtml_function_coverage=1 00:21:42.672 --rc genhtml_legend=1 00:21:42.672 --rc geninfo_all_blocks=1 00:21:42.672 --rc geninfo_unexecuted_blocks=1 00:21:42.672 00:21:42.672 ' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:42.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.672 --rc genhtml_branch_coverage=1 00:21:42.672 --rc genhtml_function_coverage=1 00:21:42.672 --rc genhtml_legend=1 00:21:42.672 --rc geninfo_all_blocks=1 00:21:42.672 --rc geninfo_unexecuted_blocks=1 00:21:42.672 00:21:42.672 ' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:42.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.672 --rc genhtml_branch_coverage=1 00:21:42.672 --rc genhtml_function_coverage=1 00:21:42.672 --rc genhtml_legend=1 00:21:42.672 --rc geninfo_all_blocks=1 00:21:42.672 --rc geninfo_unexecuted_blocks=1 00:21:42.672 00:21:42.672 ' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:42.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.672 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:42.673 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:42.673 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:42.673 14:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:49.245 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:49.245 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:49.245 Found net devices under 0000:86:00.0: cvl_0_0 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.245 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.246 14:31:37 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:49.246 Found net devices under 0000:86:00.1: cvl_0_1 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:49.246 
14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:49.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:21:49.246 00:21:49.246 --- 10.0.0.2 ping statistics --- 00:21:49.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.246 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:21:49.246 00:21:49.246 --- 10.0.0.1 ping statistics --- 00:21:49.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.246 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1536093 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1536093 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1536093 ']' 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.246 14:31:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.246 [2024-11-17 14:31:37.892716] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
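The target in this aer run comes up on the usual phy-mode plumbing shown above: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace, both ends get 10.0.0.x/24 addresses, TCP port 4420 is opened in the firewall, and one ping in each direction proves the path. Condensed to its bare commands (device names and addresses exactly as in this run):

# Condensed sketch of the namespace setup exercised above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator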
00:21:49.246 [2024-11-17 14:31:37.892758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.246 [2024-11-17 14:31:37.972916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.246 [2024-11-17 14:31:38.016480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.246 [2024-11-17 14:31:38.016517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.246 [2024-11-17 14:31:38.016526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.246 [2024-11-17 14:31:38.016532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.246 [2024-11-17 14:31:38.016537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.246 [2024-11-17 14:31:38.017997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.246 [2024-11-17 14:31:38.018106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.246 [2024-11-17 14:31:38.018214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.246 [2024-11-17 14:31:38.018215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.815 [2024-11-17 14:31:38.773304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.815 Malloc0 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.815 [2024-11-17 14:31:38.832973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.815 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.815 [ 00:21:49.815 { 00:21:49.815 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:49.815 "subtype": "Discovery", 00:21:49.815 "listen_addresses": [], 00:21:49.815 "allow_any_host": true, 00:21:49.815 "hosts": [] 00:21:49.815 }, 00:21:49.815 { 00:21:49.815 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.816 "subtype": "NVMe", 00:21:49.816 "listen_addresses": [ 00:21:49.816 { 00:21:49.816 "trtype": "TCP", 00:21:49.816 "adrfam": "IPv4", 00:21:49.816 "traddr": "10.0.0.2", 00:21:49.816 "trsvcid": "4420" 00:21:49.816 } 00:21:49.816 ], 00:21:49.816 "allow_any_host": true, 00:21:49.816 "hosts": [], 00:21:49.816 "serial_number": "SPDK00000000000001", 00:21:49.816 "model_number": "SPDK bdev Controller", 00:21:49.816 "max_namespaces": 2, 00:21:49.816 "min_cntlid": 1, 00:21:49.816 "max_cntlid": 65519, 00:21:49.816 "namespaces": [ 00:21:49.816 { 00:21:49.816 "nsid": 1, 00:21:49.816 "bdev_name": "Malloc0", 00:21:49.816 "name": "Malloc0", 00:21:49.816 "nguid": "69A0AE3E9EA74CBDBB3F68999095372F", 00:21:49.816 "uuid": "69a0ae3e-9ea7-4cbd-bb3f-68999095372f" 00:21:49.816 } 00:21:49.816 ] 00:21:49.816 } 00:21:49.816 ] 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1536341 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:49.816 14:31:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.076 Malloc1 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.076 [ 00:21:50.076 { 00:21:50.076 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:50.076 "subtype": "Discovery", 00:21:50.076 "listen_addresses": [], 00:21:50.076 "allow_any_host": true, 00:21:50.076 "hosts": [] 00:21:50.076 }, 00:21:50.076 { 00:21:50.076 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.076 "subtype": "NVMe", 00:21:50.076 "listen_addresses": [ 00:21:50.076 { 00:21:50.076 "trtype": "TCP", 00:21:50.076 "adrfam": "IPv4", 00:21:50.076 "traddr": "10.0.0.2", 00:21:50.076 "trsvcid": "4420" 00:21:50.076 } 00:21:50.076 ], 00:21:50.076 "allow_any_host": true, 00:21:50.076 "hosts": [], 00:21:50.076 "serial_number": "SPDK00000000000001", 00:21:50.076 "model_number": "SPDK bdev Controller", 00:21:50.076 "max_namespaces": 2, 00:21:50.076 "min_cntlid": 1, 00:21:50.076 "max_cntlid": 65519, 00:21:50.076 "namespaces": [ 00:21:50.076 
{ 00:21:50.076 "nsid": 1, 00:21:50.076 "bdev_name": "Malloc0", 00:21:50.076 "name": "Malloc0", 00:21:50.076 "nguid": "69A0AE3E9EA74CBDBB3F68999095372F", 00:21:50.076 "uuid": "69a0ae3e-9ea7-4cbd-bb3f-68999095372f" 00:21:50.076 }, 00:21:50.076 { 00:21:50.076 "nsid": 2, 00:21:50.076 "bdev_name": "Malloc1", 00:21:50.076 "name": "Malloc1", 00:21:50.076 "nguid": "66673E8D2736471CAC371E25B0F24394", 00:21:50.076 "uuid": "66673e8d-2736-471c-ac37-1e25b0f24394" 00:21:50.076 } 00:21:50.076 ] 00:21:50.076 } 00:21:50.076 ] 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.076 Asynchronous Event Request test 00:21:50.076 Attaching to 10.0.0.2 00:21:50.076 Attached to 10.0.0.2 00:21:50.076 Registering asynchronous event callbacks... 00:21:50.076 Starting namespace attribute notice tests for all controllers... 00:21:50.076 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:50.076 aer_cb - Changed Namespace 00:21:50.076 Cleaning up... 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1536341 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.076 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.336 rmmod nvme_tcp 00:21:50.336 rmmod nvme_fabrics 00:21:50.336 rmmod nvme_keyring 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1536093 ']' 
00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1536093 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1536093 ']' 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1536093 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1536093 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1536093' 00:21:50.336 killing process with pid 1536093 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1536093 00:21:50.336 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1536093 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.595 14:31:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.502 14:31:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:52.502 00:21:52.502 real 0m10.007s 00:21:52.502 user 0m8.133s 00:21:52.502 sys 0m4.977s 00:21:52.502 14:31:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.502 14:31:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.502 ************************************ 00:21:52.502 END TEST nvmf_aer 00:21:52.502 ************************************ 00:21:52.502 14:31:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:52.502 14:31:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:52.502 14:31:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.502 14:31:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.762 ************************************ 00:21:52.762 START TEST nvmf_async_init 00:21:52.762 
************************************ 00:21:52.762 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:52.762 * Looking for test storage... 00:21:52.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:52.762 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:52.762 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:52.762 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:52.762 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:52.762 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.762 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:52.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.763 --rc genhtml_branch_coverage=1 00:21:52.763 --rc genhtml_function_coverage=1 00:21:52.763 --rc genhtml_legend=1 00:21:52.763 --rc geninfo_all_blocks=1 00:21:52.763 --rc geninfo_unexecuted_blocks=1 00:21:52.763 00:21:52.763 ' 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:52.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.763 --rc genhtml_branch_coverage=1 00:21:52.763 --rc genhtml_function_coverage=1 00:21:52.763 --rc genhtml_legend=1 00:21:52.763 --rc geninfo_all_blocks=1 00:21:52.763 --rc geninfo_unexecuted_blocks=1 00:21:52.763 00:21:52.763 ' 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:52.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.763 --rc genhtml_branch_coverage=1 00:21:52.763 --rc genhtml_function_coverage=1 00:21:52.763 --rc genhtml_legend=1 00:21:52.763 --rc geninfo_all_blocks=1 00:21:52.763 --rc geninfo_unexecuted_blocks=1 00:21:52.763 00:21:52.763 ' 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:52.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.763 --rc genhtml_branch_coverage=1 00:21:52.763 --rc genhtml_function_coverage=1 00:21:52.763 --rc genhtml_legend=1 00:21:52.763 --rc geninfo_all_blocks=1 00:21:52.763 --rc geninfo_unexecuted_blocks=1 00:21:52.763 00:21:52.763 ' 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.763 14:31:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.763 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:52.764 14:31:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=41bd77e977d54bc19cf861731ffc0744 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.764 14:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:59.338 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:59.338 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:59.338 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:59.339 Found net devices under 0000:86:00.0: cvl_0_0 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:59.339 Found net devices under 0000:86:00.1: cvl_0_1 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.339 14:31:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:59.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:21:59.339 00:21:59.339 --- 10.0.0.2 ping statistics --- 00:21:59.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.339 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:21:59.339 00:21:59.339 --- 10.0.0.1 ping statistics --- 00:21:59.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.339 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1539872 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1539872 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1539872 ']' 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.339 14:31:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.339 [2024-11-17 14:31:47.961920] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
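Before the app start here, nvmf_tcp_init split the two e810 ports between a target network namespace and the root (initiator) namespace; the plumbing it traced reduces to this sketch:

  # Target/initiator split from nvmf/common.sh, commands as traced above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP traffic

The two pings above check both directions of that path, and nvmf_tgt is then launched inside the namespace via ip netns exec cvl_0_0_ns_spdk, which produces the DPDK/SPDK startup notices here.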
00:21:59.339 [2024-11-17 14:31:47.961962] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.339 [2024-11-17 14:31:48.040257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.339 [2024-11-17 14:31:48.079671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.339 [2024-11-17 14:31:48.079708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.339 [2024-11-17 14:31:48.079716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.339 [2024-11-17 14:31:48.079723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.339 [2024-11-17 14:31:48.079732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.339 [2024-11-17 14:31:48.080306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.339 [2024-11-17 14:31:48.228081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.339 null0 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.339 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 41bd77e977d54bc19cf861731ffc0744 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.340 [2024-11-17 14:31:48.280367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.340 nvme0n1 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.340 [ 00:21:59.340 { 00:21:59.340 "name": "nvme0n1", 00:21:59.340 "aliases": [ 00:21:59.340 "41bd77e9-77d5-4bc1-9cf8-61731ffc0744" 00:21:59.340 ], 00:21:59.340 "product_name": "NVMe disk", 00:21:59.340 "block_size": 512, 00:21:59.340 "num_blocks": 2097152, 00:21:59.340 "uuid": "41bd77e9-77d5-4bc1-9cf8-61731ffc0744", 00:21:59.340 "numa_id": 1, 00:21:59.340 "assigned_rate_limits": { 00:21:59.340 "rw_ios_per_sec": 0, 00:21:59.340 "rw_mbytes_per_sec": 0, 00:21:59.340 "r_mbytes_per_sec": 0, 00:21:59.340 "w_mbytes_per_sec": 0 00:21:59.340 }, 00:21:59.340 "claimed": false, 00:21:59.340 "zoned": false, 00:21:59.340 "supported_io_types": { 00:21:59.340 "read": true, 00:21:59.340 "write": true, 00:21:59.340 "unmap": false, 00:21:59.340 "flush": true, 00:21:59.340 "reset": true, 00:21:59.340 "nvme_admin": true, 00:21:59.340 "nvme_io": true, 00:21:59.340 "nvme_io_md": false, 00:21:59.340 "write_zeroes": true, 00:21:59.340 "zcopy": false, 00:21:59.340 "get_zone_info": false, 00:21:59.340 "zone_management": false, 00:21:59.340 "zone_append": false, 00:21:59.340 "compare": true, 00:21:59.340 "compare_and_write": true, 00:21:59.340 "abort": true, 00:21:59.340 "seek_hole": false, 00:21:59.340 "seek_data": false, 00:21:59.340 "copy": true, 00:21:59.340 "nvme_iov_md": false 00:21:59.340 }, 00:21:59.340 
"memory_domains": [ 00:21:59.340 { 00:21:59.340 "dma_device_id": "system", 00:21:59.340 "dma_device_type": 1 00:21:59.340 } 00:21:59.340 ], 00:21:59.340 "driver_specific": { 00:21:59.340 "nvme": [ 00:21:59.340 { 00:21:59.340 "trid": { 00:21:59.340 "trtype": "TCP", 00:21:59.340 "adrfam": "IPv4", 00:21:59.340 "traddr": "10.0.0.2", 00:21:59.340 "trsvcid": "4420", 00:21:59.340 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:59.340 }, 00:21:59.340 "ctrlr_data": { 00:21:59.340 "cntlid": 1, 00:21:59.340 "vendor_id": "0x8086", 00:21:59.340 "model_number": "SPDK bdev Controller", 00:21:59.340 "serial_number": "00000000000000000000", 00:21:59.340 "firmware_revision": "25.01", 00:21:59.340 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:59.340 "oacs": { 00:21:59.340 "security": 0, 00:21:59.340 "format": 0, 00:21:59.340 "firmware": 0, 00:21:59.340 "ns_manage": 0 00:21:59.340 }, 00:21:59.340 "multi_ctrlr": true, 00:21:59.340 "ana_reporting": false 00:21:59.340 }, 00:21:59.340 "vs": { 00:21:59.340 "nvme_version": "1.3" 00:21:59.340 }, 00:21:59.340 "ns_data": { 00:21:59.340 "id": 1, 00:21:59.340 "can_share": true 00:21:59.340 } 00:21:59.340 } 00:21:59.340 ], 00:21:59.340 "mp_policy": "active_passive" 00:21:59.340 } 00:21:59.340 } 00:21:59.340 ] 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.340 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.340 [2024-11-17 14:31:48.544901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:59.340 [2024-11-17 14:31:48.544959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cffa0 (9): Bad file descriptor 00:21:59.599 [2024-11-17 14:31:48.676436] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.599 [ 00:21:59.599 { 00:21:59.599 "name": "nvme0n1", 00:21:59.599 "aliases": [ 00:21:59.599 "41bd77e9-77d5-4bc1-9cf8-61731ffc0744" 00:21:59.599 ], 00:21:59.599 "product_name": "NVMe disk", 00:21:59.599 "block_size": 512, 00:21:59.599 "num_blocks": 2097152, 00:21:59.599 "uuid": "41bd77e9-77d5-4bc1-9cf8-61731ffc0744", 00:21:59.599 "numa_id": 1, 00:21:59.599 "assigned_rate_limits": { 00:21:59.599 "rw_ios_per_sec": 0, 00:21:59.599 "rw_mbytes_per_sec": 0, 00:21:59.599 "r_mbytes_per_sec": 0, 00:21:59.599 "w_mbytes_per_sec": 0 00:21:59.599 }, 00:21:59.599 "claimed": false, 00:21:59.599 "zoned": false, 00:21:59.599 "supported_io_types": { 00:21:59.599 "read": true, 00:21:59.599 "write": true, 00:21:59.599 "unmap": false, 00:21:59.599 "flush": true, 00:21:59.599 "reset": true, 00:21:59.599 "nvme_admin": true, 00:21:59.599 "nvme_io": true, 00:21:59.599 "nvme_io_md": false, 00:21:59.599 "write_zeroes": true, 00:21:59.599 "zcopy": false, 00:21:59.599 "get_zone_info": false, 00:21:59.599 "zone_management": false, 00:21:59.599 "zone_append": false, 00:21:59.599 "compare": true, 00:21:59.599 "compare_and_write": true, 00:21:59.599 "abort": true, 00:21:59.599 "seek_hole": false, 00:21:59.599 "seek_data": false, 00:21:59.599 "copy": true, 00:21:59.599 "nvme_iov_md": false 00:21:59.599 }, 00:21:59.599 "memory_domains": [ 00:21:59.599 { 00:21:59.599 "dma_device_id": "system", 00:21:59.599 "dma_device_type": 1 00:21:59.599 } 00:21:59.599 ], 00:21:59.599 "driver_specific": { 00:21:59.599 "nvme": [ 00:21:59.599 { 00:21:59.599 "trid": { 00:21:59.599 "trtype": "TCP", 00:21:59.599 "adrfam": "IPv4", 00:21:59.599 "traddr": "10.0.0.2", 00:21:59.599 "trsvcid": "4420", 00:21:59.599 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:59.599 }, 00:21:59.599 "ctrlr_data": { 00:21:59.599 "cntlid": 2, 00:21:59.599 "vendor_id": "0x8086", 00:21:59.599 "model_number": "SPDK bdev Controller", 00:21:59.599 "serial_number": "00000000000000000000", 00:21:59.599 "firmware_revision": "25.01", 00:21:59.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:59.599 "oacs": { 00:21:59.599 "security": 0, 00:21:59.599 "format": 0, 00:21:59.599 "firmware": 0, 00:21:59.599 "ns_manage": 0 00:21:59.599 }, 00:21:59.599 "multi_ctrlr": true, 00:21:59.599 "ana_reporting": false 00:21:59.599 }, 00:21:59.599 "vs": { 00:21:59.599 "nvme_version": "1.3" 00:21:59.599 }, 00:21:59.599 "ns_data": { 00:21:59.599 "id": 1, 00:21:59.599 "can_share": true 00:21:59.599 } 00:21:59.599 } 00:21:59.599 ], 00:21:59.599 "mp_policy": "active_passive" 00:21:59.599 } 00:21:59.599 } 00:21:59.599 ] 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.HWQCvowCn0 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.HWQCvowCn0 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.HWQCvowCn0 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.599 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.599 [2024-11-17 14:31:48.745506] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:59.600 [2024-11-17 14:31:48.745601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:59.600 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.600 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:59.600 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.600 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.600 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.600 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:59.600 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.600 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.600 [2024-11-17 14:31:48.765573] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.859 nvme0n1 00:21:59.859 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.859 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:59.859 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.859 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.859 [ 00:21:59.859 { 00:21:59.859 "name": "nvme0n1", 00:21:59.859 "aliases": [ 00:21:59.859 "41bd77e9-77d5-4bc1-9cf8-61731ffc0744" 00:21:59.859 ], 00:21:59.859 "product_name": "NVMe disk", 00:21:59.859 "block_size": 512, 00:21:59.859 "num_blocks": 2097152, 00:21:59.859 "uuid": "41bd77e9-77d5-4bc1-9cf8-61731ffc0744", 00:21:59.859 "numa_id": 1, 00:21:59.859 "assigned_rate_limits": { 00:21:59.859 "rw_ios_per_sec": 0, 00:21:59.859 "rw_mbytes_per_sec": 0, 00:21:59.859 "r_mbytes_per_sec": 0, 00:21:59.859 "w_mbytes_per_sec": 0 00:21:59.859 }, 00:21:59.859 "claimed": false, 00:21:59.859 "zoned": false, 00:21:59.859 "supported_io_types": { 00:21:59.859 "read": true, 00:21:59.859 "write": true, 00:21:59.859 "unmap": false, 00:21:59.859 "flush": true, 00:21:59.859 "reset": true, 00:21:59.859 "nvme_admin": true, 00:21:59.859 "nvme_io": true, 00:21:59.859 "nvme_io_md": false, 00:21:59.859 "write_zeroes": true, 00:21:59.859 "zcopy": false, 00:21:59.859 "get_zone_info": false, 00:21:59.859 "zone_management": false, 00:21:59.859 "zone_append": false, 00:21:59.859 "compare": true, 00:21:59.859 "compare_and_write": true, 00:21:59.859 "abort": true, 00:21:59.859 "seek_hole": false, 00:21:59.859 "seek_data": false, 00:21:59.859 "copy": true, 00:21:59.859 "nvme_iov_md": false 00:21:59.859 }, 00:21:59.859 "memory_domains": [ 00:21:59.859 { 00:21:59.859 "dma_device_id": "system", 00:21:59.859 "dma_device_type": 1 00:21:59.859 } 00:21:59.859 ], 00:21:59.859 "driver_specific": { 00:21:59.859 "nvme": [ 00:21:59.859 { 00:21:59.859 "trid": { 00:21:59.859 "trtype": "TCP", 00:21:59.859 "adrfam": "IPv4", 00:21:59.859 "traddr": "10.0.0.2", 00:21:59.859 "trsvcid": "4421", 00:21:59.859 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:59.859 }, 00:21:59.859 "ctrlr_data": { 00:21:59.859 "cntlid": 3, 00:21:59.859 "vendor_id": "0x8086", 00:21:59.859 "model_number": "SPDK bdev Controller", 00:21:59.859 "serial_number": "00000000000000000000", 00:21:59.859 "firmware_revision": "25.01", 00:21:59.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:59.860 "oacs": { 00:21:59.860 "security": 0, 00:21:59.860 "format": 0, 00:21:59.860 "firmware": 0, 00:21:59.860 "ns_manage": 0 00:21:59.860 }, 00:21:59.860 "multi_ctrlr": true, 00:21:59.860 "ana_reporting": false 00:21:59.860 }, 00:21:59.860 "vs": { 00:21:59.860 "nvme_version": "1.3" 00:21:59.860 }, 00:21:59.860 "ns_data": { 00:21:59.860 "id": 1, 00:21:59.860 "can_share": true 00:21:59.860 } 00:21:59.860 } 00:21:59.860 ], 00:21:59.860 "mp_policy": "active_passive" 00:21:59.860 } 00:21:59.860 } 00:21:59.860 ] 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.HWQCvowCn0 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
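The TLS leg that just finished compresses to this sketch (PSK interchange string and flags copied from the trace; the key path is a fixed stand-in for the script's mktemp name):

  KEY=/tmp/psk.key                                       # stand-in; the script uses mktemp
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
  chmod 0600 "$KEY"
  rpc.py keyring_file_add_key key0 "$KEY"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
         -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both "TLS support is considered experimental" notices above come from this path, one on the target listener and one on the host-side attach, and the final bdev_get_bdevs dump accordingly shows trsvcid 4421 and cntlid 3.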
00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.860 rmmod nvme_tcp 00:21:59.860 rmmod nvme_fabrics 00:21:59.860 rmmod nvme_keyring 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1539872 ']' 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1539872 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1539872 ']' 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1539872 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.860 14:31:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1539872 00:21:59.860 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.860 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.860 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1539872' 00:21:59.860 killing process with pid 1539872 00:21:59.860 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1539872 00:21:59.860 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1539872 00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.122 14:31:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.031 14:31:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:02.031 00:22:02.031 real 0m9.497s 00:22:02.031 user 0m3.086s 00:22:02.031 sys 0m4.852s 00:22:02.031 14:31:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.031 14:31:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.031 ************************************ 00:22:02.031 END TEST nvmf_async_init 00:22:02.031 ************************************ 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.291 ************************************ 00:22:02.291 START TEST dma 00:22:02.291 ************************************ 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:02.291 * Looking for test storage... 00:22:02.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:02.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.291 --rc genhtml_branch_coverage=1 00:22:02.291 --rc genhtml_function_coverage=1 00:22:02.291 --rc genhtml_legend=1 00:22:02.291 --rc geninfo_all_blocks=1 00:22:02.291 --rc geninfo_unexecuted_blocks=1 00:22:02.291 00:22:02.291 ' 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:02.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.291 --rc genhtml_branch_coverage=1 00:22:02.291 --rc genhtml_function_coverage=1 00:22:02.291 --rc genhtml_legend=1 00:22:02.291 --rc geninfo_all_blocks=1 00:22:02.291 --rc geninfo_unexecuted_blocks=1 00:22:02.291 00:22:02.291 ' 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:02.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.291 --rc genhtml_branch_coverage=1 00:22:02.291 --rc genhtml_function_coverage=1 00:22:02.291 --rc genhtml_legend=1 00:22:02.291 --rc geninfo_all_blocks=1 00:22:02.291 --rc geninfo_unexecuted_blocks=1 00:22:02.291 00:22:02.291 ' 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:02.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.291 --rc genhtml_branch_coverage=1 00:22:02.291 --rc genhtml_function_coverage=1 00:22:02.291 --rc genhtml_legend=1 00:22:02.291 --rc geninfo_all_blocks=1 00:22:02.291 --rc geninfo_unexecuted_blocks=1 00:22:02.291 00:22:02.291 ' 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.291 
14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.291 14:31:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:02.292 00:22:02.292 real 0m0.207s 00:22:02.292 user 0m0.127s 00:22:02.292 sys 0m0.094s 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.292 14:31:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:02.292 ************************************ 00:22:02.292 END TEST dma 00:22:02.292 ************************************ 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.551 ************************************ 00:22:02.551 START TEST nvmf_identify 00:22:02.551 
************************************ 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:02.551 * Looking for test storage... 00:22:02.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.551 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:02.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.552 --rc genhtml_branch_coverage=1 00:22:02.552 --rc genhtml_function_coverage=1 00:22:02.552 --rc genhtml_legend=1 00:22:02.552 --rc geninfo_all_blocks=1 00:22:02.552 --rc geninfo_unexecuted_blocks=1 00:22:02.552 00:22:02.552 ' 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:02.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.552 --rc genhtml_branch_coverage=1 00:22:02.552 --rc genhtml_function_coverage=1 00:22:02.552 --rc genhtml_legend=1 00:22:02.552 --rc geninfo_all_blocks=1 00:22:02.552 --rc geninfo_unexecuted_blocks=1 00:22:02.552 00:22:02.552 ' 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:02.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.552 --rc genhtml_branch_coverage=1 00:22:02.552 --rc genhtml_function_coverage=1 00:22:02.552 --rc genhtml_legend=1 00:22:02.552 --rc geninfo_all_blocks=1 00:22:02.552 --rc geninfo_unexecuted_blocks=1 00:22:02.552 00:22:02.552 ' 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:02.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.552 --rc genhtml_branch_coverage=1 00:22:02.552 --rc genhtml_function_coverage=1 00:22:02.552 --rc genhtml_legend=1 00:22:02.552 --rc geninfo_all_blocks=1 00:22:02.552 --rc geninfo_unexecuted_blocks=1 00:22:02.552 00:22:02.552 ' 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.552 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.811 14:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:09.381 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:09.381 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.381 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
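The device-discovery loop running here keys everything off sysfs: each supported PCI function is mapped to its kernel net device by globbing the device's net/ directory, exactly as the pci_net_devs assignment in the trace shows. That mapping is easy to verify by hand (illustrative, using the first e810 port found in this run):

  ls /sys/bus/pci/devices/0000:86:00.0/net
  # -> cvl_0_0, the name the harness appends to net_devs and later moves into the target netns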
00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:09.382 Found net devices under 0000:86:00.0: cvl_0_0 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:09.382 Found net devices under 0000:86:00.1: cvl_0_1 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:22:09.382 00:22:09.382 --- 10.0.0.2 ping statistics --- 00:22:09.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.382 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:22:09.382 00:22:09.382 --- 10.0.0.1 ping statistics --- 00:22:09.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.382 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1543689 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1543689 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1543689 ']' 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.382 14:31:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.382 [2024-11-17 14:31:57.812217] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
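Spelled out, the plumbing that nvmftestinit verified with those two pings is a back-to-back, two-port topology: one e810 port is moved into a private network namespace to act as the target at 10.0.0.2, while the other stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of the same steps, with the interface and namespace names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Because the target lives in the namespace, nvmf_tgt itself is launched under ip netns exec cvl_0_0_ns_spdk, which is the NVMF_TARGET_NS_CMD prefix visible in the trace.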
00:22:09.382 [2024-11-17 14:31:57.812260] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.382 [2024-11-17 14:31:57.891969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.382 [2024-11-17 14:31:57.936515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.382 [2024-11-17 14:31:57.936552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.382 [2024-11-17 14:31:57.936559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.382 [2024-11-17 14:31:57.936565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.382 [2024-11-17 14:31:57.936570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.382 [2024-11-17 14:31:57.938090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.382 [2024-11-17 14:31:57.938198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.382 [2024-11-17 14:31:57.938307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.382 [2024-11-17 14:31:57.938308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.642 [2024-11-17 14:31:58.653283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.642 Malloc0 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.642 [2024-11-17 14:31:58.753848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.642 [ 00:22:09.642 { 00:22:09.642 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:09.642 "subtype": "Discovery", 00:22:09.642 "listen_addresses": [ 00:22:09.642 { 00:22:09.642 "trtype": "TCP", 00:22:09.642 "adrfam": "IPv4", 00:22:09.642 "traddr": "10.0.0.2", 00:22:09.642 "trsvcid": "4420" 00:22:09.642 } 00:22:09.642 ], 00:22:09.642 "allow_any_host": true, 00:22:09.642 "hosts": [] 00:22:09.642 }, 00:22:09.642 { 00:22:09.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.642 "subtype": "NVMe", 00:22:09.642 "listen_addresses": [ 00:22:09.642 { 00:22:09.642 "trtype": "TCP", 00:22:09.642 "adrfam": "IPv4", 00:22:09.642 "traddr": "10.0.0.2", 00:22:09.642 "trsvcid": "4420" 00:22:09.642 } 00:22:09.642 ], 00:22:09.642 "allow_any_host": true, 00:22:09.642 "hosts": [], 00:22:09.642 "serial_number": "SPDK00000000000001", 00:22:09.642 "model_number": "SPDK bdev Controller", 00:22:09.642 "max_namespaces": 32, 00:22:09.642 "min_cntlid": 1, 00:22:09.642 "max_cntlid": 65519, 00:22:09.642 "namespaces": [ 00:22:09.642 { 00:22:09.642 "nsid": 1, 00:22:09.642 "bdev_name": "Malloc0", 00:22:09.642 "name": "Malloc0", 00:22:09.642 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:09.642 "eui64": "ABCDEF0123456789", 00:22:09.642 "uuid": "a9240f20-c84b-4fa0-b09c-3b19c52a9083" 00:22:09.642 } 00:22:09.642 ] 00:22:09.642 } 00:22:09.642 ] 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.642 14:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:09.642 [2024-11-17 14:31:58.806397] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:22:09.642 [2024-11-17 14:31:58.806430] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543934 ] 00:22:09.642 [2024-11-17 14:31:58.848270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:09.642 [2024-11-17 14:31:58.848320] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:09.642 [2024-11-17 14:31:58.848328] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:09.642 [2024-11-17 14:31:58.848339] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:09.642 [2024-11-17 14:31:58.848349] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:09.642 [2024-11-17 14:31:58.852660] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:09.642 [2024-11-17 14:31:58.852693] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18ef690 0 00:22:09.642 [2024-11-17 14:31:58.859363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:09.642 [2024-11-17 14:31:58.859378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:09.642 [2024-11-17 14:31:58.859383] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:09.642 [2024-11-17 14:31:58.859386] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:09.642 [2024-11-17 14:31:58.859420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.642 [2024-11-17 14:31:58.859425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.642 [2024-11-17 14:31:58.859429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ef690) 00:22:09.642 [2024-11-17 14:31:58.859444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:09.642 [2024-11-17 14:31:58.859462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951100, cid 0, qid 0 00:22:09.907 [2024-11-17 14:31:58.867363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.907 [2024-11-17 14:31:58.867371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.907 [2024-11-17 14:31:58.867374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951100) on tqpair=0x18ef690 00:22:09.907 [2024-11-17 14:31:58.867388] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:09.907 [2024-11-17 14:31:58.867394] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:09.907 [2024-11-17 14:31:58.867400] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:09.907 [2024-11-17 14:31:58.867413] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ef690) 00:22:09.907 [2024-11-17 14:31:58.867427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.907 [2024-11-17 14:31:58.867440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951100, cid 0, qid 0 00:22:09.907 [2024-11-17 14:31:58.867614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.907 [2024-11-17 14:31:58.867620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.907 [2024-11-17 14:31:58.867623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951100) on tqpair=0x18ef690 00:22:09.907 [2024-11-17 14:31:58.867633] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:09.907 [2024-11-17 14:31:58.867639] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:09.907 [2024-11-17 14:31:58.867646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ef690) 00:22:09.907 [2024-11-17 14:31:58.867662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.907 [2024-11-17 14:31:58.867673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951100, cid 0, qid 0 00:22:09.907 [2024-11-17 14:31:58.867738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.907 [2024-11-17 14:31:58.867744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.907 [2024-11-17 14:31:58.867747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951100) on tqpair=0x18ef690 00:22:09.907 [2024-11-17 14:31:58.867756] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:09.907 [2024-11-17 14:31:58.867764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:09.907 [2024-11-17 14:31:58.867770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ef690) 00:22:09.907 [2024-11-17 14:31:58.867783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.907 [2024-11-17 14:31:58.867792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951100, cid 0, qid 0 
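The DEBUG trace around this point is spdk_nvme_identify walking the fabrics controller through its standard init state machine: FABRIC CONNECT on the admin queue, read VS and CAP via property gets, disable (CC.EN = 0, wait CSTS.RDY = 0), enable (CC.EN = 1, wait CSTS.RDY = 1), then IDENTIFY. For orientation, the target it is probing was assembled a few lines earlier with the RPC sequence below (a sketch reconstructed from the xtrace; rpc.py again standing in for the harness's rpc_cmd wrapper):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The identify run itself targets the discovery subsystem (subnqn:nqn.2014-08.org.nvmexpress.discovery) on the same 10.0.0.2:4420 listener, which is why the trace shows a plain discovery-controller bring-up rather than an I/O-queue connect.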
00:22:09.907 [2024-11-17 14:31:58.867861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.907 [2024-11-17 14:31:58.867866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.907 [2024-11-17 14:31:58.867870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951100) on tqpair=0x18ef690 00:22:09.907 [2024-11-17 14:31:58.867878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:09.907 [2024-11-17 14:31:58.867887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.867894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ef690) 00:22:09.907 [2024-11-17 14:31:58.867899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.907 [2024-11-17 14:31:58.867908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951100, cid 0, qid 0 00:22:09.907 [2024-11-17 14:31:58.868012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.907 [2024-11-17 14:31:58.868018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.907 [2024-11-17 14:31:58.868021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.868024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951100) on tqpair=0x18ef690 00:22:09.907 [2024-11-17 14:31:58.868029] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:09.907 [2024-11-17 14:31:58.868033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:09.907 [2024-11-17 14:31:58.868040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:09.907 [2024-11-17 14:31:58.868148] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:09.907 [2024-11-17 14:31:58.868152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:09.907 [2024-11-17 14:31:58.868162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.868165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.868169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ef690) 00:22:09.907 [2024-11-17 14:31:58.868175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.907 [2024-11-17 14:31:58.868184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951100, cid 0, qid 0 00:22:09.907 [2024-11-17 14:31:58.868402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.907 [2024-11-17 14:31:58.868408] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.907 [2024-11-17 14:31:58.868411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.868415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951100) on tqpair=0x18ef690 00:22:09.907 [2024-11-17 14:31:58.868419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:09.907 [2024-11-17 14:31:58.868427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.868431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.907 [2024-11-17 14:31:58.868434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ef690) 00:22:09.907 [2024-11-17 14:31:58.868440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.907 [2024-11-17 14:31:58.868449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951100, cid 0, qid 0 00:22:09.908 [2024-11-17 14:31:58.868511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.908 [2024-11-17 14:31:58.868517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.908 [2024-11-17 14:31:58.868520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951100) on tqpair=0x18ef690 00:22:09.908 [2024-11-17 14:31:58.868527] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:09.908 [2024-11-17 14:31:58.868532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:09.908 [2024-11-17 14:31:58.868539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:09.908 [2024-11-17 14:31:58.868548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:09.908 [2024-11-17 14:31:58.868556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ef690) 00:22:09.908 [2024-11-17 14:31:58.868566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.908 [2024-11-17 14:31:58.868575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951100, cid 0, qid 0 00:22:09.908 [2024-11-17 14:31:58.868710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.908 [2024-11-17 14:31:58.868716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.908 [2024-11-17 14:31:58.868719] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868723] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ef690): datao=0, datal=4096, cccid=0 00:22:09.908 [2024-11-17 14:31:58.868727] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1951100) on tqpair(0x18ef690): expected_datao=0, payload_size=4096 00:22:09.908 [2024-11-17 14:31:58.868732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868740] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868745] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.908 [2024-11-17 14:31:58.868777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.908 [2024-11-17 14:31:58.868780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951100) on tqpair=0x18ef690 00:22:09.908 [2024-11-17 14:31:58.868790] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:09.908 [2024-11-17 14:31:58.868795] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:09.908 [2024-11-17 14:31:58.868799] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:09.908 [2024-11-17 14:31:58.868806] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:09.908 [2024-11-17 14:31:58.868811] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:09.908 [2024-11-17 14:31:58.868815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:09.908 [2024-11-17 14:31:58.868826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:09.908 [2024-11-17 14:31:58.868832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ef690) 00:22:09.908 [2024-11-17 14:31:58.868846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:09.908 [2024-11-17 14:31:58.868856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951100, cid 0, qid 0 00:22:09.908 [2024-11-17 14:31:58.868933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.908 [2024-11-17 14:31:58.868939] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.908 [2024-11-17 14:31:58.868942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951100) on tqpair=0x18ef690 00:22:09.908 [2024-11-17 14:31:58.868952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ef690) 00:22:09.908 
[2024-11-17 14:31:58.868964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.908 [2024-11-17 14:31:58.868970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18ef690) 00:22:09.908 [2024-11-17 14:31:58.868982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.908 [2024-11-17 14:31:58.868987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.868993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18ef690) 00:22:09.908 [2024-11-17 14:31:58.868999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.908 [2024-11-17 14:31:58.869005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.869009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.869012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ef690) 00:22:09.908 [2024-11-17 14:31:58.869017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.908 [2024-11-17 14:31:58.869021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:09.908 [2024-11-17 14:31:58.869030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:09.908 [2024-11-17 14:31:58.869035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.869039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ef690) 00:22:09.908 [2024-11-17 14:31:58.869045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.908 [2024-11-17 14:31:58.869055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951100, cid 0, qid 0 00:22:09.908 [2024-11-17 14:31:58.869060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951280, cid 1, qid 0 00:22:09.908 [2024-11-17 14:31:58.869064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951400, cid 2, qid 0 00:22:09.908 [2024-11-17 14:31:58.869068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951580, cid 3, qid 0 00:22:09.908 [2024-11-17 14:31:58.869072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951700, cid 4, qid 0 00:22:09.908 [2024-11-17 14:31:58.869189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.908 [2024-11-17 14:31:58.869195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.908 [2024-11-17 14:31:58.869198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:22:09.908 [2024-11-17 14:31:58.869201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951700) on tqpair=0x18ef690 00:22:09.908 [2024-11-17 14:31:58.869208] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:09.908 [2024-11-17 14:31:58.869213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:09.908 [2024-11-17 14:31:58.869221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.869225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ef690) 00:22:09.908 [2024-11-17 14:31:58.869231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.908 [2024-11-17 14:31:58.869240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951700, cid 4, qid 0 00:22:09.908 [2024-11-17 14:31:58.869316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.908 [2024-11-17 14:31:58.869322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.908 [2024-11-17 14:31:58.869325] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.869328] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ef690): datao=0, datal=4096, cccid=4 00:22:09.908 [2024-11-17 14:31:58.869332] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1951700) on tqpair(0x18ef690): expected_datao=0, payload_size=4096 00:22:09.908 [2024-11-17 14:31:58.869336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.869346] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.908 [2024-11-17 14:31:58.869350] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.912359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.909 [2024-11-17 14:31:58.912374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.909 [2024-11-17 14:31:58.912378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.912381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951700) on tqpair=0x18ef690 00:22:09.909 [2024-11-17 14:31:58.912395] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:09.909 [2024-11-17 14:31:58.912417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.912422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ef690) 00:22:09.909 [2024-11-17 14:31:58.912429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.909 [2024-11-17 14:31:58.912436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.912439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.912443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18ef690) 00:22:09.909 [2024-11-17 14:31:58.912448] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.909 [2024-11-17 14:31:58.912464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951700, cid 4, qid 0 00:22:09.909 [2024-11-17 14:31:58.912469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951880, cid 5, qid 0 00:22:09.909 [2024-11-17 14:31:58.912640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.909 [2024-11-17 14:31:58.912646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.909 [2024-11-17 14:31:58.912649] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.912653] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ef690): datao=0, datal=1024, cccid=4 00:22:09.909 [2024-11-17 14:31:58.912657] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1951700) on tqpair(0x18ef690): expected_datao=0, payload_size=1024 00:22:09.909 [2024-11-17 14:31:58.912661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.912667] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.912670] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.912675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.909 [2024-11-17 14:31:58.912680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.909 [2024-11-17 14:31:58.912683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.912687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951880) on tqpair=0x18ef690 00:22:09.909 [2024-11-17 14:31:58.953506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.909 [2024-11-17 14:31:58.953517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.909 [2024-11-17 14:31:58.953521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.953525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951700) on tqpair=0x18ef690 00:22:09.909 [2024-11-17 14:31:58.953537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.953541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ef690) 00:22:09.909 [2024-11-17 14:31:58.953548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.909 [2024-11-17 14:31:58.953565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951700, cid 4, qid 0 00:22:09.909 [2024-11-17 14:31:58.953688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.909 [2024-11-17 14:31:58.953693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.909 [2024-11-17 14:31:58.953697] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.909 [2024-11-17 14:31:58.953703] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ef690): datao=0, datal=3072, cccid=4 00:22:09.909 [2024-11-17 14:31:58.953707] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1951700) on tqpair(0x18ef690): expected_datao=0, payload_size=3072 00:22:09.909 [2024-11-17 14:31:58.953711] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:09.909 [2024-11-17 14:31:58.953761] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:09.909 [2024-11-17 14:31:58.953765] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:09.909 [2024-11-17 14:31:58.953806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:09.909 [2024-11-17 14:31:58.953811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:09.909 [2024-11-17 14:31:58.953815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:09.909 [2024-11-17 14:31:58.953818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951700) on tqpair=0x18ef690
00:22:09.909 [2024-11-17 14:31:58.953826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:09.909 [2024-11-17 14:31:58.953830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ef690)
00:22:09.909 [2024-11-17 14:31:58.953835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:09.909 [2024-11-17 14:31:58.953849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951700, cid 4, qid 0
00:22:09.909 [2024-11-17 14:31:58.953936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:09.909 [2024-11-17 14:31:58.953942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:09.909 [2024-11-17 14:31:58.953945] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:09.909 [2024-11-17 14:31:58.953948] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ef690): datao=0, datal=8, cccid=4
00:22:09.909 [2024-11-17 14:31:58.953952] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1951700) on tqpair(0x18ef690): expected_datao=0, payload_size=8
00:22:09.909 [2024-11-17 14:31:58.953956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:09.909 [2024-11-17 14:31:58.953961] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:09.909 [2024-11-17 14:31:58.953965] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:09.909 [2024-11-17 14:31:58.994499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:09.909 [2024-11-17 14:31:58.994511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:09.909 [2024-11-17 14:31:58.994514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:09.909 [2024-11-17 14:31:58.994518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951700) on tqpair=0x18ef690
00:22:09.909 =====================================================
00:22:09.909 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:09.909 =====================================================
00:22:09.909 Controller Capabilities/Features
00:22:09.909 ================================
00:22:09.909 Vendor ID: 0000
00:22:09.909 Subsystem Vendor ID: 0000
00:22:09.909 Serial Number: ....................
00:22:09.909 Model Number: ........................................
00:22:09.909 Firmware Version: 25.01
00:22:09.909 Recommended Arb Burst: 0
00:22:09.909 IEEE OUI Identifier: 00 00 00
00:22:09.909 Multi-path I/O
00:22:09.909 May have multiple subsystem ports: No
00:22:09.909 May have multiple controllers: No
00:22:09.909 Associated with SR-IOV VF: No
00:22:09.909 Max Data Transfer Size: 131072
00:22:09.909 Max Number of Namespaces: 0
00:22:09.909 Max Number of I/O Queues: 1024
00:22:09.909 NVMe Specification Version (VS): 1.3
00:22:09.909 NVMe Specification Version (Identify): 1.3
00:22:09.909 Maximum Queue Entries: 128
00:22:09.909 Contiguous Queues Required: Yes
00:22:09.909 Arbitration Mechanisms Supported
00:22:09.909 Weighted Round Robin: Not Supported
00:22:09.909 Vendor Specific: Not Supported
00:22:09.909 Reset Timeout: 15000 ms
00:22:09.909 Doorbell Stride: 4 bytes
00:22:09.909 NVM Subsystem Reset: Not Supported
00:22:09.909 Command Sets Supported
00:22:09.909 NVM Command Set: Supported
00:22:09.909 Boot Partition: Not Supported
00:22:09.909 Memory Page Size Minimum: 4096 bytes
00:22:09.909 Memory Page Size Maximum: 4096 bytes
00:22:09.909 Persistent Memory Region: Not Supported
00:22:09.909 Optional Asynchronous Events Supported
00:22:09.909 Namespace Attribute Notices: Not Supported
00:22:09.909 Firmware Activation Notices: Not Supported
00:22:09.910 ANA Change Notices: Not Supported
00:22:09.910 PLE Aggregate Log Change Notices: Not Supported
00:22:09.910 LBA Status Info Alert Notices: Not Supported
00:22:09.910 EGE Aggregate Log Change Notices: Not Supported
00:22:09.910 Normal NVM Subsystem Shutdown event: Not Supported
00:22:09.910 Zone Descriptor Change Notices: Not Supported
00:22:09.910 Discovery Log Change Notices: Supported
00:22:09.910 Controller Attributes
00:22:09.910 128-bit Host Identifier: Not Supported
00:22:09.910 Non-Operational Permissive Mode: Not Supported
00:22:09.910 NVM Sets: Not Supported
00:22:09.910 Read Recovery Levels: Not Supported
00:22:09.910 Endurance Groups: Not Supported
00:22:09.910 Predictable Latency Mode: Not Supported
00:22:09.910 Traffic Based Keep ALive: Not Supported
00:22:09.910 Namespace Granularity: Not Supported
00:22:09.910 SQ Associations: Not Supported
00:22:09.910 UUID List: Not Supported
00:22:09.910 Multi-Domain Subsystem: Not Supported
00:22:09.910 Fixed Capacity Management: Not Supported
00:22:09.910 Variable Capacity Management: Not Supported
00:22:09.910 Delete Endurance Group: Not Supported
00:22:09.910 Delete NVM Set: Not Supported
00:22:09.910 Extended LBA Formats Supported: Not Supported
00:22:09.910 Flexible Data Placement Supported: Not Supported
00:22:09.910
00:22:09.910 Controller Memory Buffer Support
00:22:09.910 ================================
00:22:09.910 Supported: No
00:22:09.910
00:22:09.910 Persistent Memory Region Support
00:22:09.910 ================================
00:22:09.910 Supported: No
00:22:09.910
00:22:09.910 Admin Command Set Attributes
00:22:09.910 ============================
00:22:09.910 Security Send/Receive: Not Supported
00:22:09.910 Format NVM: Not Supported
00:22:09.910 Firmware Activate/Download: Not Supported
00:22:09.910 Namespace Management: Not Supported
00:22:09.910 Device Self-Test: Not Supported
00:22:09.910 Directives: Not Supported
00:22:09.910 NVMe-MI: Not Supported
00:22:09.910 Virtualization Management: Not Supported
00:22:09.910 Doorbell Buffer Config: Not Supported
00:22:09.910 Get LBA Status Capability: Not Supported
00:22:09.910 Command & Feature Lockdown Capability: Not Supported
00:22:09.910 Abort Command Limit: 1
00:22:09.910 Async Event Request Limit: 4
00:22:09.910 Number of Firmware Slots: N/A
00:22:09.910 Firmware Slot 1 Read-Only: N/A
00:22:09.910 Firmware Activation Without Reset: N/A
00:22:09.910 Multiple Update Detection Support: N/A
00:22:09.910 Firmware Update Granularity: No Information Provided
00:22:09.910 Per-Namespace SMART Log: No
00:22:09.910 Asymmetric Namespace Access Log Page: Not Supported
00:22:09.910 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:09.910 Command Effects Log Page: Not Supported
00:22:09.910 Get Log Page Extended Data: Supported
00:22:09.910 Telemetry Log Pages: Not Supported
00:22:09.910 Persistent Event Log Pages: Not Supported
00:22:09.910 Supported Log Pages Log Page: May Support
00:22:09.910 Commands Supported & Effects Log Page: Not Supported
00:22:09.910 Feature Identifiers & Effects Log Page:May Support
00:22:09.910 NVMe-MI Commands & Effects Log Page: May Support
00:22:09.910 Data Area 4 for Telemetry Log: Not Supported
00:22:09.910 Error Log Page Entries Supported: 128
00:22:09.910 Keep Alive: Not Supported
00:22:09.910
00:22:09.910 NVM Command Set Attributes
00:22:09.910 ==========================
00:22:09.910 Submission Queue Entry Size
00:22:09.910 Max: 1
00:22:09.910 Min: 1
00:22:09.910 Completion Queue Entry Size
00:22:09.910 Max: 1
00:22:09.910 Min: 1
00:22:09.910 Number of Namespaces: 0
00:22:09.910 Compare Command: Not Supported
00:22:09.910 Write Uncorrectable Command: Not Supported
00:22:09.910 Dataset Management Command: Not Supported
00:22:09.910 Write Zeroes Command: Not Supported
00:22:09.910 Set Features Save Field: Not Supported
00:22:09.910 Reservations: Not Supported
00:22:09.910 Timestamp: Not Supported
00:22:09.910 Copy: Not Supported
00:22:09.910 Volatile Write Cache: Not Present
00:22:09.910 Atomic Write Unit (Normal): 1
00:22:09.910 Atomic Write Unit (PFail): 1
00:22:09.910 Atomic Compare & Write Unit: 1
00:22:09.910 Fused Compare & Write: Supported
00:22:09.910 Scatter-Gather List
00:22:09.910 SGL Command Set: Supported
00:22:09.910 SGL Keyed: Supported
00:22:09.910 SGL Bit Bucket Descriptor: Not Supported
00:22:09.910 SGL Metadata Pointer: Not Supported
00:22:09.910 Oversized SGL: Not Supported
00:22:09.910 SGL Metadata Address: Not Supported
00:22:09.910 SGL Offset: Supported
00:22:09.910 Transport SGL Data Block: Not Supported
00:22:09.910 Replay Protected Memory Block: Not Supported
00:22:09.910
00:22:09.910 Firmware Slot Information
00:22:09.910 =========================
00:22:09.910 Active slot: 0
00:22:09.910
00:22:09.910
00:22:09.910 Error Log
00:22:09.910 =========
00:22:09.910
00:22:09.910 Active Namespaces
00:22:09.910 =================
00:22:09.910 Discovery Log Page
00:22:09.910 ==================
00:22:09.910 Generation Counter: 2
00:22:09.910 Number of Records: 2
00:22:09.910 Record Format: 0
00:22:09.910
00:22:09.910 Discovery Log Entry 0
00:22:09.910 ----------------------
00:22:09.910 Transport Type: 3 (TCP)
00:22:09.910 Address Family: 1 (IPv4)
00:22:09.910 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:09.910 Entry Flags:
00:22:09.910 Duplicate Returned Information: 1
00:22:09.910 Explicit Persistent Connection Support for Discovery: 1
00:22:09.910 Transport Requirements:
00:22:09.910 Secure Channel: Not Required
00:22:09.910 Port ID: 0 (0x0000)
00:22:09.910 Controller ID: 65535 (0xffff)
00:22:09.910 Admin Max SQ Size: 128
00:22:09.910 Transport Service Identifier: 4420
00:22:09.910 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:09.910 Transport Address: 10.0.0.2
00:22:09.910 Discovery Log Entry 1
00:22:09.910 ----------------------
00:22:09.910 Transport Type: 3 (TCP)
00:22:09.910 Address Family: 1 (IPv4)
00:22:09.910 Subsystem Type: 2 (NVM Subsystem)
00:22:09.910 Entry Flags:
00:22:09.910 Duplicate Returned Information: 0
00:22:09.910 Explicit Persistent Connection Support for Discovery: 0
00:22:09.910 Transport Requirements:
00:22:09.910 Secure Channel: Not Required
00:22:09.910 Port ID: 0 (0x0000)
00:22:09.910 Controller ID: 65535 (0xffff)
00:22:09.910 Admin Max SQ Size: 128
00:22:09.910 Transport Service Identifier: 4420
00:22:09.910 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:09.910 Transport Address: 10.0.0.2 [2024-11-17 14:31:58.994600] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:22:09.911 [2024-11-17 14:31:58.994612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951100) on tqpair=0x18ef690
00:22:09.911 [2024-11-17 14:31:58.994619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:09.911 [2024-11-17 14:31:58.994624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951280) on tqpair=0x18ef690
00:22:09.911 [2024-11-17 14:31:58.994628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:09.911 [2024-11-17 14:31:58.994633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951400) on tqpair=0x18ef690
00:22:09.911 [2024-11-17 14:31:58.994637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:09.911 [2024-11-17 14:31:58.994641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951580) on tqpair=0x18ef690
00:22:09.911 [2024-11-17 14:31:58.994645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:09.911 [2024-11-17 14:31:58.994657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.994661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.994664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ef690)
00:22:09.911 [2024-11-17 14:31:58.994672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:09.911 [2024-11-17 14:31:58.994685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951580, cid 3, qid 0
00:22:09.911 [2024-11-17 14:31:58.994750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:09.911 [2024-11-17 14:31:58.994756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:09.911 [2024-11-17 14:31:58.994759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.994763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951580) on tqpair=0x18ef690
00:22:09.911 [2024-11-17 14:31:58.994769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.994772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.994776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ef690)
00:22:09.911 [2024-11-17
14:31:58.994781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.911 [2024-11-17 14:31:58.994794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951580, cid 3, qid 0 00:22:09.911 [2024-11-17 14:31:58.994877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.911 [2024-11-17 14:31:58.994883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.911 [2024-11-17 14:31:58.994886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.911 [2024-11-17 14:31:58.994889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951580) on tqpair=0x18ef690 00:22:09.911 [2024-11-17 14:31:58.994894] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:09.911 [2024-11-17 14:31:58.994898] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:09.911 [2024-11-17 14:31:58.994907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.911 [2024-11-17 14:31:58.994911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.911 [2024-11-17 14:31:58.994914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ef690) 00:22:09.911 [2024-11-17 14:31:58.994920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.911 [2024-11-17 14:31:58.994930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951580, cid 3, qid 0 00:22:09.911 [2024-11-17 14:31:58.995002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.911 [2024-11-17 14:31:58.995008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.911 [2024-11-17 14:31:58.995011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.911 [2024-11-17 14:31:58.995014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951580) on tqpair=0x18ef690 00:22:09.911 [2024-11-17 14:31:58.995023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.911 [2024-11-17 14:31:58.995027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.911 [2024-11-17 14:31:58.995030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ef690) 00:22:09.911 [2024-11-17 14:31:58.995036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.911 [2024-11-17 14:31:58.995045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951580, cid 3, qid 0 00:22:09.911 [2024-11-17 14:31:58.995153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.911 [2024-11-17 14:31:58.995159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.911 [2024-11-17 14:31:58.995164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.911 [2024-11-17 14:31:58.995167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951580) on tqpair=0x18ef690 00:22:09.911 [2024-11-17 14:31:58.995175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.911 [2024-11-17 14:31:58.995179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.911 [2024-11-17 14:31:58.995182] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ef690)
00:22:09.911 [2024-11-17 14:31:58.995188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:09.911 [2024-11-17 14:31:58.995198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951580, cid 3, qid 0
00:22:09.911 [2024-11-17 14:31:58.995304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:09.911 [2024-11-17 14:31:58.995310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:09.911 [2024-11-17 14:31:58.995313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.995316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951580) on tqpair=0x18ef690
00:22:09.911 [2024-11-17 14:31:58.995325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.995328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.995331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ef690)
00:22:09.911 [2024-11-17 14:31:58.995337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:09.911 [2024-11-17 14:31:58.995346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951580, cid 3, qid 0
00:22:09.911 [2024-11-17 14:31:58.999359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:09.911 [2024-11-17 14:31:58.999367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:09.911 [2024-11-17 14:31:58.999370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.999373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951580) on tqpair=0x18ef690
00:22:09.911 [2024-11-17 14:31:58.999384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.999388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.999391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ef690)
00:22:09.911 [2024-11-17 14:31:58.999397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:09.911 [2024-11-17 14:31:58.999408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1951580, cid 3, qid 0
00:22:09.911 [2024-11-17 14:31:58.999546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:09.911 [2024-11-17 14:31:58.999552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:09.911 [2024-11-17 14:31:58.999555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:09.911 [2024-11-17 14:31:58.999558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1951580) on tqpair=0x18ef690
00:22:09.911 [2024-11-17 14:31:58.999565] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds
00:22:09.911
00:22:09.911 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
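The run below repeats the same init sequence against nqn.2016-06.io.spdk:cnode1, driven by the spdk_nvme_identify binary invoked above. As a rough sketch of the host-side calls such a tool makes through SPDK's public API (illustrative, not the tool's actual source; the app name is made up and error handling is abbreviated):

    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";  /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same target the test passes with -r above. */
        memset(&trid, 0, sizeof(trid));
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

        /* Runs the init state machine logged below: FABRIC CONNECT, read
         * VS/CAP, the CC.EN/CSTS.RDY handshake, IDENTIFY, AER setup, and
         * keep-alive configuration. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("CNTLID: 0x%04x, MDTS: %u\n", cdata->cntlid, cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

Built against the SPDK headers and libraries, a program along these lines should produce the same CONNECT and IDENTIFY admin-command traffic that the debug records below show.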
00:22:09.911 [2024-11-17 14:31:59.038138] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:22:09.911 [2024-11-17 14:31:59.038172] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543940 ] 00:22:09.911 [2024-11-17 14:31:59.078003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:09.911 [2024-11-17 14:31:59.078041] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:09.911 [2024-11-17 14:31:59.078046] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:09.911 [2024-11-17 14:31:59.078057] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:09.911 [2024-11-17 14:31:59.078066] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:09.911 [2024-11-17 14:31:59.081539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:09.911 [2024-11-17 14:31:59.081564] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd60690 0 00:22:09.911 [2024-11-17 14:31:59.089365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:09.911 [2024-11-17 14:31:59.089377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:09.911 [2024-11-17 14:31:59.089381] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:09.911 [2024-11-17 14:31:59.089385] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:09.911 [2024-11-17 14:31:59.089409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.911 [2024-11-17 14:31:59.089414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.911 [2024-11-17 14:31:59.089418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd60690) 00:22:09.912 [2024-11-17 14:31:59.089428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:09.912 [2024-11-17 14:31:59.089444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2100, cid 0, qid 0 00:22:09.912 [2024-11-17 14:31:59.096362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.912 [2024-11-17 14:31:59.096370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.912 [2024-11-17 14:31:59.096373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2100) on tqpair=0xd60690 00:22:09.912 [2024-11-17 14:31:59.096386] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:09.912 [2024-11-17 14:31:59.096392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:09.912 [2024-11-17 14:31:59.096397] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:09.912 [2024-11-17 14:31:59.096408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096412] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd60690) 00:22:09.912 [2024-11-17 14:31:59.096422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.912 [2024-11-17 14:31:59.096436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2100, cid 0, qid 0 00:22:09.912 [2024-11-17 14:31:59.096589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.912 [2024-11-17 14:31:59.096595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.912 [2024-11-17 14:31:59.096598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2100) on tqpair=0xd60690 00:22:09.912 [2024-11-17 14:31:59.096606] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:09.912 [2024-11-17 14:31:59.096613] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:09.912 [2024-11-17 14:31:59.096621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd60690) 00:22:09.912 [2024-11-17 14:31:59.096634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.912 [2024-11-17 14:31:59.096644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2100, cid 0, qid 0 00:22:09.912 [2024-11-17 14:31:59.096706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.912 [2024-11-17 14:31:59.096712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.912 [2024-11-17 14:31:59.096715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2100) on tqpair=0xd60690 00:22:09.912 [2024-11-17 14:31:59.096722] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:09.912 [2024-11-17 14:31:59.096729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:09.912 [2024-11-17 14:31:59.096735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd60690) 00:22:09.912 [2024-11-17 14:31:59.096747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.912 [2024-11-17 14:31:59.096756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2100, cid 0, qid 0 00:22:09.912 [2024-11-17 14:31:59.096823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.912 [2024-11-17 
14:31:59.096829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.912 [2024-11-17 14:31:59.096832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2100) on tqpair=0xd60690 00:22:09.912 [2024-11-17 14:31:59.096840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:09.912 [2024-11-17 14:31:59.096848] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd60690) 00:22:09.912 [2024-11-17 14:31:59.096860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.912 [2024-11-17 14:31:59.096869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2100, cid 0, qid 0 00:22:09.912 [2024-11-17 14:31:59.096941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.912 [2024-11-17 14:31:59.096947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.912 [2024-11-17 14:31:59.096949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.096953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2100) on tqpair=0xd60690 00:22:09.912 [2024-11-17 14:31:59.096956] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:09.912 [2024-11-17 14:31:59.096961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:09.912 [2024-11-17 14:31:59.096968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:09.912 [2024-11-17 14:31:59.097075] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:09.912 [2024-11-17 14:31:59.097081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:09.912 [2024-11-17 14:31:59.097088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.097091] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.097094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd60690) 00:22:09.912 [2024-11-17 14:31:59.097100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.912 [2024-11-17 14:31:59.097110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2100, cid 0, qid 0 00:22:09.912 [2024-11-17 14:31:59.097172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.912 [2024-11-17 14:31:59.097178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.912 [2024-11-17 14:31:59.097181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.912 [2024-11-17 14:31:59.097184] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2100) on tqpair=0xd60690 00:22:09.913 [2024-11-17 14:31:59.097188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:09.913 [2024-11-17 14:31:59.097196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd60690) 00:22:09.913 [2024-11-17 14:31:59.097208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.913 [2024-11-17 14:31:59.097217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2100, cid 0, qid 0 00:22:09.913 [2024-11-17 14:31:59.097289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.913 [2024-11-17 14:31:59.097294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.913 [2024-11-17 14:31:59.097297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2100) on tqpair=0xd60690 00:22:09.913 [2024-11-17 14:31:59.097304] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:09.913 [2024-11-17 14:31:59.097308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:09.913 [2024-11-17 14:31:59.097315] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:09.913 [2024-11-17 14:31:59.097321] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:09.913 [2024-11-17 14:31:59.097329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd60690) 00:22:09.913 [2024-11-17 14:31:59.097338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.913 [2024-11-17 14:31:59.097347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2100, cid 0, qid 0 00:22:09.913 [2024-11-17 14:31:59.097439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.913 [2024-11-17 14:31:59.097445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.913 [2024-11-17 14:31:59.097449] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097452] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd60690): datao=0, datal=4096, cccid=0 00:22:09.913 [2024-11-17 14:31:59.097459] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2100) on tqpair(0xd60690): expected_datao=0, payload_size=4096 00:22:09.913 [2024-11-17 14:31:59.097463] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097470] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097473] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.913 [2024-11-17 14:31:59.097493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.913 [2024-11-17 14:31:59.097496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2100) on tqpair=0xd60690 00:22:09.913 [2024-11-17 14:31:59.097506] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:09.913 [2024-11-17 14:31:59.097510] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:09.913 [2024-11-17 14:31:59.097514] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:09.913 [2024-11-17 14:31:59.097519] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:09.913 [2024-11-17 14:31:59.097523] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:09.913 [2024-11-17 14:31:59.097528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:09.913 [2024-11-17 14:31:59.097538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:09.913 [2024-11-17 14:31:59.097544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd60690) 00:22:09.913 [2024-11-17 14:31:59.097556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:09.913 [2024-11-17 14:31:59.097567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2100, cid 0, qid 0 00:22:09.913 [2024-11-17 14:31:59.097632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.913 [2024-11-17 14:31:59.097637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.913 [2024-11-17 14:31:59.097640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2100) on tqpair=0xd60690 00:22:09.913 [2024-11-17 14:31:59.097649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd60690) 00:22:09.913 [2024-11-17 14:31:59.097660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.913 [2024-11-17 14:31:59.097665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd60690) 00:22:09.913 [2024-11-17 14:31:59.097676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.913 [2024-11-17 14:31:59.097681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd60690) 00:22:09.913 [2024-11-17 14:31:59.097695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.913 [2024-11-17 14:31:59.097700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd60690) 00:22:09.913 [2024-11-17 14:31:59.097711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.913 [2024-11-17 14:31:59.097715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:09.913 [2024-11-17 14:31:59.097722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:09.913 [2024-11-17 14:31:59.097728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd60690) 00:22:09.913 [2024-11-17 14:31:59.097737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.913 [2024-11-17 14:31:59.097748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2100, cid 0, qid 0 00:22:09.913 [2024-11-17 14:31:59.097752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2280, cid 1, qid 0 00:22:09.913 [2024-11-17 14:31:59.097756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2400, cid 2, qid 0 00:22:09.913 [2024-11-17 14:31:59.097761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2580, cid 3, qid 0 00:22:09.913 [2024-11-17 14:31:59.097764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2700, cid 4, qid 0 00:22:09.913 [2024-11-17 14:31:59.097866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.913 [2024-11-17 14:31:59.097872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.913 [2024-11-17 14:31:59.097875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2700) on tqpair=0xd60690 00:22:09.913 [2024-11-17 14:31:59.097884] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:09.913 [2024-11-17 14:31:59.097888] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:09.913 [2024-11-17 14:31:59.097896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:09.913 [2024-11-17 14:31:59.097901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:09.913 [2024-11-17 14:31:59.097907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.913 [2024-11-17 14:31:59.097913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd60690) 00:22:09.913 [2024-11-17 14:31:59.097918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:09.913 [2024-11-17 14:31:59.097928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2700, cid 4, qid 0 00:22:09.914 [2024-11-17 14:31:59.097992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.914 [2024-11-17 14:31:59.097998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.914 [2024-11-17 14:31:59.098001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2700) on tqpair=0xd60690 00:22:09.914 [2024-11-17 14:31:59.098056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd60690) 00:22:09.914 [2024-11-17 14:31:59.098081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.914 [2024-11-17 14:31:59.098091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2700, cid 4, qid 0 00:22:09.914 [2024-11-17 14:31:59.098165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.914 [2024-11-17 14:31:59.098171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.914 [2024-11-17 14:31:59.098174] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098177] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd60690): datao=0, datal=4096, cccid=4 00:22:09.914 [2024-11-17 14:31:59.098181] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2700) on tqpair(0xd60690): expected_datao=0, payload_size=4096 00:22:09.914 [2024-11-17 14:31:59.098185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098190] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098194] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:22:09.914 [2024-11-17 14:31:59.098204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.914 [2024-11-17 14:31:59.098209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.914 [2024-11-17 14:31:59.098212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2700) on tqpair=0xd60690 00:22:09.914 [2024-11-17 14:31:59.098224] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:09.914 [2024-11-17 14:31:59.098236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd60690) 00:22:09.914 [2024-11-17 14:31:59.098260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.914 [2024-11-17 14:31:59.098271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2700, cid 4, qid 0 00:22:09.914 [2024-11-17 14:31:59.098357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.914 [2024-11-17 14:31:59.098363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.914 [2024-11-17 14:31:59.098366] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098369] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd60690): datao=0, datal=4096, cccid=4 00:22:09.914 [2024-11-17 14:31:59.098373] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2700) on tqpair(0xd60690): expected_datao=0, payload_size=4096 00:22:09.914 [2024-11-17 14:31:59.098377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098387] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098391] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.914 [2024-11-17 14:31:59.098433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.914 [2024-11-17 14:31:59.098436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2700) on tqpair=0xd60690 00:22:09.914 [2024-11-17 14:31:59.098449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098468] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd60690) 00:22:09.914 [2024-11-17 14:31:59.098473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.914 [2024-11-17 14:31:59.098484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2700, cid 4, qid 0 00:22:09.914 [2024-11-17 14:31:59.098560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.914 [2024-11-17 14:31:59.098565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.914 [2024-11-17 14:31:59.098568] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098572] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd60690): datao=0, datal=4096, cccid=4 00:22:09.914 [2024-11-17 14:31:59.098575] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2700) on tqpair(0xd60690): expected_datao=0, payload_size=4096 00:22:09.914 [2024-11-17 14:31:59.098579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098585] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098588] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.914 [2024-11-17 14:31:59.098606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.914 [2024-11-17 14:31:59.098609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2700) on tqpair=0xd60690 00:22:09.914 [2024-11-17 14:31:59.098618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098651] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:09.914 [2024-11-17 14:31:59.098655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:09.914 [2024-11-17 14:31:59.098660] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:09.914 [2024-11-17 14:31:59.098672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd60690) 00:22:09.914 [2024-11-17 14:31:59.098683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.914 [2024-11-17 14:31:59.098688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd60690) 00:22:09.914 [2024-11-17 14:31:59.098700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.914 [2024-11-17 14:31:59.098711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2700, cid 4, qid 0 00:22:09.914 [2024-11-17 14:31:59.098716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2880, cid 5, qid 0 00:22:09.914 [2024-11-17 14:31:59.098796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.914 [2024-11-17 14:31:59.098802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.914 [2024-11-17 14:31:59.098805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.914 [2024-11-17 14:31:59.098808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2700) on tqpair=0xd60690 00:22:09.914 [2024-11-17 14:31:59.098814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.914 [2024-11-17 14:31:59.098819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.915 [2024-11-17 14:31:59.098822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.098825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2880) on tqpair=0xd60690 00:22:09.915 [2024-11-17 14:31:59.098833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.098836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd60690) 00:22:09.915 [2024-11-17 14:31:59.098841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.915 [2024-11-17 14:31:59.098851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2880, cid 5, qid 0 00:22:09.915 [2024-11-17 14:31:59.098916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.915 [2024-11-17 14:31:59.098921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.915 [2024-11-17 14:31:59.098924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.098928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2880) on tqpair=0xd60690 00:22:09.915 [2024-11-17 14:31:59.098935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.098939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd60690) 00:22:09.915 [2024-11-17 14:31:59.098944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.915 [2024-11-17 14:31:59.098953] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2880, cid 5, qid 0 00:22:09.915 [2024-11-17 14:31:59.099012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.915 [2024-11-17 14:31:59.099017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.915 [2024-11-17 14:31:59.099020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.099023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2880) on tqpair=0xd60690 00:22:09.915 [2024-11-17 14:31:59.099031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.099034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd60690) 00:22:09.915 [2024-11-17 14:31:59.099040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.915 [2024-11-17 14:31:59.099050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2880, cid 5, qid 0 00:22:09.915 [2024-11-17 14:31:59.099109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.915 [2024-11-17 14:31:59.099115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.915 [2024-11-17 14:31:59.099118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.099121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2880) on tqpair=0xd60690 00:22:09.915 [2024-11-17 14:31:59.099134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.099137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd60690) 00:22:09.915 [2024-11-17 14:31:59.099143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.915 [2024-11-17 14:31:59.099149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.099152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd60690) 00:22:09.915 [2024-11-17 14:31:59.099157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.915 [2024-11-17 14:31:59.099163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.099167] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd60690) 00:22:09.915 [2024-11-17 14:31:59.099172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.915 [2024-11-17 14:31:59.099178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.099181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd60690) 00:22:09.915 [2024-11-17 14:31:59.099187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.915 [2024-11-17 14:31:59.099197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2880, cid 5, qid 0 00:22:09.915 
[2024-11-17 14:31:59.099201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2700, cid 4, qid 0 00:22:09.915 [2024-11-17 14:31:59.099205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2a00, cid 6, qid 0 00:22:09.915 [2024-11-17 14:31:59.099209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2b80, cid 7, qid 0 00:22:09.915 [2024-11-17 14:31:59.102363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.915 [2024-11-17 14:31:59.102371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.915 [2024-11-17 14:31:59.102374] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102377] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd60690): datao=0, datal=8192, cccid=5 00:22:09.915 [2024-11-17 14:31:59.102381] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2880) on tqpair(0xd60690): expected_datao=0, payload_size=8192 00:22:09.915 [2024-11-17 14:31:59.102385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102391] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102394] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.915 [2024-11-17 14:31:59.102404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.915 [2024-11-17 14:31:59.102407] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102410] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd60690): datao=0, datal=512, cccid=4 00:22:09.915 [2024-11-17 14:31:59.102414] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2700) on tqpair(0xd60690): expected_datao=0, payload_size=512 00:22:09.915 [2024-11-17 14:31:59.102418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102427] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102430] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.915 [2024-11-17 14:31:59.102440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.915 [2024-11-17 14:31:59.102451] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102455] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd60690): datao=0, datal=512, cccid=6 00:22:09.915 [2024-11-17 14:31:59.102458] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2a00) on tqpair(0xd60690): expected_datao=0, payload_size=512 00:22:09.915 [2024-11-17 14:31:59.102462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102467] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102470] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.915 [2024-11-17 14:31:59.102480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.915 [2024-11-17 14:31:59.102483] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102486] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd60690): datao=0, datal=4096, cccid=7 00:22:09.915 [2024-11-17 14:31:59.102489] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2b80) on tqpair(0xd60690): expected_datao=0, payload_size=4096 00:22:09.915 [2024-11-17 14:31:59.102493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102498] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102501] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.915 [2024-11-17 14:31:59.102511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.915 [2024-11-17 14:31:59.102514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2880) on tqpair=0xd60690 00:22:09.915 [2024-11-17 14:31:59.102529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.915 [2024-11-17 14:31:59.102534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.915 [2024-11-17 14:31:59.102537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2700) on tqpair=0xd60690 00:22:09.915 [2024-11-17 14:31:59.102548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.915 [2024-11-17 14:31:59.102553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.915 [2024-11-17 14:31:59.102556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.915 [2024-11-17 14:31:59.102559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2a00) on tqpair=0xd60690 00:22:09.915 [2024-11-17 14:31:59.102565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.915 [2024-11-17 14:31:59.102570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.915 [2024-11-17 14:31:59.102573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.916 [2024-11-17 14:31:59.102576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2b80) on tqpair=0xd60690 00:22:09.916 ===================================================== 00:22:09.916 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:09.916 ===================================================== 00:22:09.916 Controller Capabilities/Features 00:22:09.916 ================================ 00:22:09.916 Vendor ID: 8086 00:22:09.916 Subsystem Vendor ID: 8086 00:22:09.916 Serial Number: SPDK00000000000001 00:22:09.916 Model Number: SPDK bdev Controller 00:22:09.916 Firmware Version: 25.01 00:22:09.916 Recommended Arb Burst: 6 00:22:09.916 IEEE OUI Identifier: e4 d2 5c 00:22:09.916 Multi-path I/O 00:22:09.916 May have multiple subsystem ports: Yes 00:22:09.916 May have multiple controllers: Yes 00:22:09.916 Associated with SR-IOV VF: No 00:22:09.916 Max Data Transfer Size: 131072 00:22:09.916 Max Number of Namespaces: 32 00:22:09.916 Max Number of I/O Queues: 127 00:22:09.916 NVMe Specification Version (VS): 1.3 00:22:09.916 NVMe Specification Version (Identify): 1.3 00:22:09.916 
Maximum Queue Entries: 128 00:22:09.916 Contiguous Queues Required: Yes 00:22:09.916 Arbitration Mechanisms Supported 00:22:09.916 Weighted Round Robin: Not Supported 00:22:09.916 Vendor Specific: Not Supported 00:22:09.916 Reset Timeout: 15000 ms 00:22:09.916 Doorbell Stride: 4 bytes 00:22:09.916 NVM Subsystem Reset: Not Supported 00:22:09.916 Command Sets Supported 00:22:09.916 NVM Command Set: Supported 00:22:09.916 Boot Partition: Not Supported 00:22:09.916 Memory Page Size Minimum: 4096 bytes 00:22:09.916 Memory Page Size Maximum: 4096 bytes 00:22:09.916 Persistent Memory Region: Not Supported 00:22:09.916 Optional Asynchronous Events Supported 00:22:09.916 Namespace Attribute Notices: Supported 00:22:09.916 Firmware Activation Notices: Not Supported 00:22:09.916 ANA Change Notices: Not Supported 00:22:09.916 PLE Aggregate Log Change Notices: Not Supported 00:22:09.916 LBA Status Info Alert Notices: Not Supported 00:22:09.916 EGE Aggregate Log Change Notices: Not Supported 00:22:09.916 Normal NVM Subsystem Shutdown event: Not Supported 00:22:09.916 Zone Descriptor Change Notices: Not Supported 00:22:09.916 Discovery Log Change Notices: Not Supported 00:22:09.916 Controller Attributes 00:22:09.916 128-bit Host Identifier: Supported 00:22:09.916 Non-Operational Permissive Mode: Not Supported 00:22:09.916 NVM Sets: Not Supported 00:22:09.916 Read Recovery Levels: Not Supported 00:22:09.916 Endurance Groups: Not Supported 00:22:09.916 Predictable Latency Mode: Not Supported 00:22:09.916 Traffic Based Keep ALive: Not Supported 00:22:09.916 Namespace Granularity: Not Supported 00:22:09.916 SQ Associations: Not Supported 00:22:09.916 UUID List: Not Supported 00:22:09.916 Multi-Domain Subsystem: Not Supported 00:22:09.916 Fixed Capacity Management: Not Supported 00:22:09.916 Variable Capacity Management: Not Supported 00:22:09.916 Delete Endurance Group: Not Supported 00:22:09.916 Delete NVM Set: Not Supported 00:22:09.916 Extended LBA Formats Supported: Not Supported 00:22:09.916 Flexible Data Placement Supported: Not Supported 00:22:09.916 00:22:09.916 Controller Memory Buffer Support 00:22:09.916 ================================ 00:22:09.916 Supported: No 00:22:09.916 00:22:09.916 Persistent Memory Region Support 00:22:09.916 ================================ 00:22:09.916 Supported: No 00:22:09.916 00:22:09.916 Admin Command Set Attributes 00:22:09.916 ============================ 00:22:09.916 Security Send/Receive: Not Supported 00:22:09.916 Format NVM: Not Supported 00:22:09.916 Firmware Activate/Download: Not Supported 00:22:09.916 Namespace Management: Not Supported 00:22:09.916 Device Self-Test: Not Supported 00:22:09.916 Directives: Not Supported 00:22:09.916 NVMe-MI: Not Supported 00:22:09.916 Virtualization Management: Not Supported 00:22:09.916 Doorbell Buffer Config: Not Supported 00:22:09.916 Get LBA Status Capability: Not Supported 00:22:09.916 Command & Feature Lockdown Capability: Not Supported 00:22:09.916 Abort Command Limit: 4 00:22:09.916 Async Event Request Limit: 4 00:22:09.916 Number of Firmware Slots: N/A 00:22:09.916 Firmware Slot 1 Read-Only: N/A 00:22:09.916 Firmware Activation Without Reset: N/A 00:22:09.916 Multiple Update Detection Support: N/A 00:22:09.916 Firmware Update Granularity: No Information Provided 00:22:09.916 Per-Namespace SMART Log: No 00:22:09.916 Asymmetric Namespace Access Log Page: Not Supported 00:22:09.916 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:09.916 Command Effects Log Page: Supported 00:22:09.916 Get Log Page Extended Data: 
Supported 00:22:09.916 Telemetry Log Pages: Not Supported 00:22:09.916 Persistent Event Log Pages: Not Supported 00:22:09.916 Supported Log Pages Log Page: May Support 00:22:09.916 Commands Supported & Effects Log Page: Not Supported 00:22:09.916 Feature Identifiers & Effects Log Page:May Support 00:22:09.916 NVMe-MI Commands & Effects Log Page: May Support 00:22:09.916 Data Area 4 for Telemetry Log: Not Supported 00:22:09.916 Error Log Page Entries Supported: 128 00:22:09.916 Keep Alive: Supported 00:22:09.916 Keep Alive Granularity: 10000 ms 00:22:09.916 00:22:09.916 NVM Command Set Attributes 00:22:09.916 ========================== 00:22:09.916 Submission Queue Entry Size 00:22:09.916 Max: 64 00:22:09.916 Min: 64 00:22:09.916 Completion Queue Entry Size 00:22:09.916 Max: 16 00:22:09.916 Min: 16 00:22:09.916 Number of Namespaces: 32 00:22:09.916 Compare Command: Supported 00:22:09.916 Write Uncorrectable Command: Not Supported 00:22:09.916 Dataset Management Command: Supported 00:22:09.916 Write Zeroes Command: Supported 00:22:09.916 Set Features Save Field: Not Supported 00:22:09.916 Reservations: Supported 00:22:09.916 Timestamp: Not Supported 00:22:09.916 Copy: Supported 00:22:09.916 Volatile Write Cache: Present 00:22:09.916 Atomic Write Unit (Normal): 1 00:22:09.916 Atomic Write Unit (PFail): 1 00:22:09.916 Atomic Compare & Write Unit: 1 00:22:09.916 Fused Compare & Write: Supported 00:22:09.916 Scatter-Gather List 00:22:09.916 SGL Command Set: Supported 00:22:09.916 SGL Keyed: Supported 00:22:09.916 SGL Bit Bucket Descriptor: Not Supported 00:22:09.916 SGL Metadata Pointer: Not Supported 00:22:09.916 Oversized SGL: Not Supported 00:22:09.916 SGL Metadata Address: Not Supported 00:22:09.916 SGL Offset: Supported 00:22:09.916 Transport SGL Data Block: Not Supported 00:22:09.916 Replay Protected Memory Block: Not Supported 00:22:09.916 00:22:09.916 Firmware Slot Information 00:22:09.916 ========================= 00:22:09.916 Active slot: 1 00:22:09.916 Slot 1 Firmware Revision: 25.01 00:22:09.916 00:22:09.916 00:22:09.916 Commands Supported and Effects 00:22:09.916 ============================== 00:22:09.916 Admin Commands 00:22:09.916 -------------- 00:22:09.916 Get Log Page (02h): Supported 00:22:09.916 Identify (06h): Supported 00:22:09.916 Abort (08h): Supported 00:22:09.916 Set Features (09h): Supported 00:22:09.916 Get Features (0Ah): Supported 00:22:09.916 Asynchronous Event Request (0Ch): Supported 00:22:09.916 Keep Alive (18h): Supported 00:22:09.916 I/O Commands 00:22:09.916 ------------ 00:22:09.916 Flush (00h): Supported LBA-Change 00:22:09.916 Write (01h): Supported LBA-Change 00:22:09.916 Read (02h): Supported 00:22:09.916 Compare (05h): Supported 00:22:09.916 Write Zeroes (08h): Supported LBA-Change 00:22:09.917 Dataset Management (09h): Supported LBA-Change 00:22:09.917 Copy (19h): Supported LBA-Change 00:22:09.917 00:22:09.917 Error Log 00:22:09.917 ========= 00:22:09.917 00:22:09.917 Arbitration 00:22:09.917 =========== 00:22:09.917 Arbitration Burst: 1 00:22:09.917 00:22:09.917 Power Management 00:22:09.917 ================ 00:22:09.917 Number of Power States: 1 00:22:09.917 Current Power State: Power State #0 00:22:09.917 Power State #0: 00:22:09.917 Max Power: 0.00 W 00:22:09.917 Non-Operational State: Operational 00:22:09.917 Entry Latency: Not Reported 00:22:09.917 Exit Latency: Not Reported 00:22:09.917 Relative Read Throughput: 0 00:22:09.917 Relative Read Latency: 0 00:22:09.917 Relative Write Throughput: 0 00:22:09.917 Relative Write Latency: 0 00:22:09.917 
Idle Power: Not Reported 00:22:09.917 Active Power: Not Reported 00:22:09.917 Non-Operational Permissive Mode: Not Supported 00:22:09.917 00:22:09.917 Health Information 00:22:09.917 ================== 00:22:09.917 Critical Warnings: 00:22:09.917 Available Spare Space: OK 00:22:09.917 Temperature: OK 00:22:09.917 Device Reliability: OK 00:22:09.917 Read Only: No 00:22:09.917 Volatile Memory Backup: OK 00:22:09.917 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:09.917 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:09.917 Available Spare: 0% 00:22:09.917 Available Spare Threshold: 0% 00:22:09.917 Life Percentage Used:[2024-11-17 14:31:59.102661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.102666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd60690) 00:22:09.917 [2024-11-17 14:31:59.102673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.917 [2024-11-17 14:31:59.102687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2b80, cid 7, qid 0 00:22:09.917 [2024-11-17 14:31:59.102850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.917 [2024-11-17 14:31:59.102856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.917 [2024-11-17 14:31:59.102859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.102862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2b80) on tqpair=0xd60690 00:22:09.917 [2024-11-17 14:31:59.102890] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:09.917 [2024-11-17 14:31:59.102900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2100) on tqpair=0xd60690 00:22:09.917 [2024-11-17 14:31:59.102905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.917 [2024-11-17 14:31:59.102910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2280) on tqpair=0xd60690 00:22:09.917 [2024-11-17 14:31:59.102914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.917 [2024-11-17 14:31:59.102918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2400) on tqpair=0xd60690 00:22:09.917 [2024-11-17 14:31:59.102922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.917 [2024-11-17 14:31:59.102926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2580) on tqpair=0xd60690 00:22:09.917 [2024-11-17 14:31:59.102930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.917 [2024-11-17 14:31:59.102937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.102940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.102943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd60690) 00:22:09.917 [2024-11-17 14:31:59.102950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
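The trace above walks SPDK's standard fabrics attach state machine: wait for CSTS.RDY = 1, IDENTIFY the controller, configure AER, set the keep-alive timeout and queue count, enumerate and identify the namespace, then go ready and issue the GET FEATURES / GET LOG PAGE batch that feeds the controller dump. The dump and the debug traces share stdout, which is why the value for "Life Percentage Used:" only appears after the debug burst below. A minimal sketch of reproducing this attach/identify flow by hand, assuming a default SPDK build tree (the binary path and the -L log-flag usage are assumptions, not values taken from this run):

  # Attach over TCP and dump identify data with library debug traces enabled,
  # which produces *DEBUG* lines like the ones in this log (debug build only).
  ./build/examples/identify \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all

The destruct sequence that starts above (the four outstanding ASYNC EVENT REQUESTs, matching the Async Event Request Limit of 4, completing as ABORTED - SQ DELETION) continues below with the CC-based shutdown; RTD3E = 0 us falls back to the default 10000 ms shutdown timeout.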
00:22:09.917 [2024-11-17 14:31:59.102961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2580, cid 3, qid 0 00:22:09.917 [2024-11-17 14:31:59.103026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.917 [2024-11-17 14:31:59.103031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.917 [2024-11-17 14:31:59.103034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.103038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2580) on tqpair=0xd60690 00:22:09.917 [2024-11-17 14:31:59.103043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.103047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.103050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd60690) 00:22:09.917 [2024-11-17 14:31:59.103055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.917 [2024-11-17 14:31:59.103067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2580, cid 3, qid 0 00:22:09.917 [2024-11-17 14:31:59.103142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.917 [2024-11-17 14:31:59.103147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.917 [2024-11-17 14:31:59.103150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.103153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2580) on tqpair=0xd60690 00:22:09.917 [2024-11-17 14:31:59.103157] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:09.917 [2024-11-17 14:31:59.103161] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:09.917 [2024-11-17 14:31:59.103169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.103173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.103179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd60690) 00:22:09.917 [2024-11-17 14:31:59.103185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.917 [2024-11-17 14:31:59.103194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2580, cid 3, qid 0 00:22:09.917 [2024-11-17 14:31:59.103260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.917 [2024-11-17 14:31:59.103265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.917 [2024-11-17 14:31:59.103268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.103272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2580) on tqpair=0xd60690 00:22:09.917 [2024-11-17 14:31:59.103280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.103283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.103286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd60690) 00:22:09.917 [2024-11-17 14:31:59.103292] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.917 [2024-11-17 14:31:59.103301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2580, cid 3, qid 0 00:22:09.917 [2024-11-17 14:31:59.107362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.917 [2024-11-17 14:31:59.107370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.917 [2024-11-17 14:31:59.107373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.107376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2580) on tqpair=0xd60690 00:22:09.917 [2024-11-17 14:31:59.107386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.107389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.107392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd60690) 00:22:09.917 [2024-11-17 14:31:59.107398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.917 [2024-11-17 14:31:59.107409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2580, cid 3, qid 0 00:22:09.917 [2024-11-17 14:31:59.107558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.917 [2024-11-17 14:31:59.107565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.917 [2024-11-17 14:31:59.107569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.917 [2024-11-17 14:31:59.107573] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2580) on tqpair=0xd60690 00:22:09.917 [2024-11-17 14:31:59.107580] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:22:09.917 0% 00:22:09.917 Data Units Read: 0 00:22:09.917 Data Units Written: 0 00:22:09.917 Host Read Commands: 0 00:22:09.917 Host Write Commands: 0 00:22:09.917 Controller Busy Time: 0 minutes 00:22:09.917 Power Cycles: 0 00:22:09.917 Power On Hours: 0 hours 00:22:09.917 Unsafe Shutdowns: 0 00:22:09.917 Unrecoverable Media Errors: 0 00:22:09.917 Lifetime Error Log Entries: 0 00:22:09.917 Warning Temperature Time: 0 minutes 00:22:09.917 Critical Temperature Time: 0 minutes 00:22:09.917 00:22:09.918 Number of Queues 00:22:09.918 ================ 00:22:09.918 Number of I/O Submission Queues: 127 00:22:09.918 Number of I/O Completion Queues: 127 00:22:09.918 00:22:09.918 Active Namespaces 00:22:09.918 ================= 00:22:09.918 Namespace ID:1 00:22:09.918 Error Recovery Timeout: Unlimited 00:22:09.918 Command Set Identifier: NVM (00h) 00:22:09.918 Deallocate: Supported 00:22:09.918 Deallocated/Unwritten Error: Not Supported 00:22:09.918 Deallocated Read Value: Unknown 00:22:09.918 Deallocate in Write Zeroes: Not Supported 00:22:09.918 Deallocated Guard Field: 0xFFFF 00:22:09.918 Flush: Supported 00:22:09.918 Reservation: Supported 00:22:09.918 Namespace Sharing Capabilities: Multiple Controllers 00:22:09.918 Size (in LBAs): 131072 (0GiB) 00:22:09.918 Capacity (in LBAs): 131072 (0GiB) 00:22:09.918 Utilization (in LBAs): 131072 (0GiB) 00:22:09.918 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:09.918 EUI64: ABCDEF0123456789 00:22:09.918 UUID: a9240f20-c84b-4fa0-b09c-3b19c52a9083 00:22:09.918 Thin Provisioning: Not Supported 00:22:09.918 
Per-NS Atomic Units: Yes 00:22:09.918 Atomic Boundary Size (Normal): 0 00:22:09.918 Atomic Boundary Size (PFail): 0 00:22:09.918 Atomic Boundary Offset: 0 00:22:09.918 Maximum Single Source Range Length: 65535 00:22:09.918 Maximum Copy Length: 65535 00:22:09.918 Maximum Source Range Count: 1 00:22:09.918 NGUID/EUI64 Never Reused: No 00:22:09.918 Namespace Write Protected: No 00:22:09.918 Number of LBA Formats: 1 00:22:09.918 Current LBA Format: LBA Format #00 00:22:09.918 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:09.918 00:22:09.918 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:10.176 rmmod nvme_tcp 00:22:10.176 rmmod nvme_fabrics 00:22:10.176 rmmod nvme_keyring 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1543689 ']' 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1543689 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1543689 ']' 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1543689 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1543689 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1543689' 00:22:10.176 killing process with pid 1543689 00:22:10.176 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1543689 00:22:10.176 14:31:59 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1543689 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.434 14:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.341 14:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:12.341 00:22:12.341 real 0m9.931s 00:22:12.341 user 0m7.817s 00:22:12.341 sys 0m4.980s 00:22:12.341 14:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.341 14:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.341 ************************************ 00:22:12.341 END TEST nvmf_identify 00:22:12.341 ************************************ 00:22:12.341 14:32:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:12.341 14:32:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:12.341 14:32:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.341 14:32:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.601 ************************************ 00:22:12.601 START TEST nvmf_perf 00:22:12.601 ************************************ 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:12.601 * Looking for test storage... 
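The nvmf_identify teardown traced above reduces to a short command sequence; a sketch using the values from this run (the subsystem NQN, target PID 1543689, and interface name cvl_0_1 are specific to this job and will differ elsewhere):

  # Remove the subsystem from the running target, unload host kernel modules,
  # stop the nvmf_tgt process, and undo the test's firewall/interface state.
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp            # in this run also drops nvme_fabrics/nvme_keyring
  kill 1543689 && wait 1543689       # nvmf_tgt runs as reactor_0
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1

With that done, the harness moves straight into the nvmf_perf stage of the same run, traced below.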
00:22:12.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.601 --rc genhtml_branch_coverage=1 00:22:12.601 --rc genhtml_function_coverage=1 00:22:12.601 --rc genhtml_legend=1 00:22:12.601 --rc geninfo_all_blocks=1 00:22:12.601 --rc geninfo_unexecuted_blocks=1 00:22:12.601 00:22:12.601 ' 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.601 --rc genhtml_branch_coverage=1 00:22:12.601 --rc genhtml_function_coverage=1 00:22:12.601 --rc genhtml_legend=1 00:22:12.601 --rc geninfo_all_blocks=1 00:22:12.601 --rc geninfo_unexecuted_blocks=1 00:22:12.601 00:22:12.601 ' 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.601 --rc genhtml_branch_coverage=1 00:22:12.601 --rc genhtml_function_coverage=1 00:22:12.601 --rc genhtml_legend=1 00:22:12.601 --rc geninfo_all_blocks=1 00:22:12.601 --rc geninfo_unexecuted_blocks=1 00:22:12.601 00:22:12.601 ' 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.601 --rc genhtml_branch_coverage=1 00:22:12.601 --rc genhtml_function_coverage=1 00:22:12.601 --rc genhtml_legend=1 00:22:12.601 --rc geninfo_all_blocks=1 00:22:12.601 --rc geninfo_unexecuted_blocks=1 00:22:12.601 00:22:12.601 ' 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.601 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.602 14:32:01 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:12.602 14:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.223 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.223 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.223 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:19.223 14:32:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:19.223 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.223 14:32:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:19.223 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:19.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:22:19.224 00:22:19.224 --- 10.0.0.2 ping statistics --- 00:22:19.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.224 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:22:19.224 00:22:19.224 --- 10.0.0.1 ping statistics --- 00:22:19.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.224 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1547462 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1547462 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1547462 ']' 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:19.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.224 14:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:19.224 [2024-11-17 14:32:07.816274] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:22:19.224 [2024-11-17 14:32:07.816317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.224 [2024-11-17 14:32:07.895968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.224 [2024-11-17 14:32:07.939524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.224 [2024-11-17 14:32:07.939560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.224 [2024-11-17 14:32:07.939567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.224 [2024-11-17 14:32:07.939574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.224 [2024-11-17 14:32:07.939580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.224 [2024-11-17 14:32:07.941154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.224 [2024-11-17 14:32:07.941263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.224 [2024-11-17 14:32:07.941386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.224 [2024-11-17 14:32:07.941386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.484 14:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.484 14:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:19.484 14:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:19.484 14:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.484 14:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:19.484 14:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.484 14:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:19.484 14:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:22.773 14:32:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:22.773 14:32:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:22.773 14:32:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:22.774 14:32:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:23.032 14:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
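As a reference for reading the trace above: the bdev staging it performs boils down to three RPC invocations, sketched here with the long workspace prefix shortened to $SPDK (a shorthand assumed for readability, not a variable the script defines):

  # Build an NVMe bdev config from the attached PCIe controllers and load it
  $SPDK/scripts/gen_nvme.sh | $SPDK/scripts/rpc.py load_subsystem_config
  # Recover the PCIe address backing the Nvme0 bdev (same jq filter as in the trace)
  local_nvme_trid=$($SPDK/scripts/rpc.py framework_get_config bdev \
    | jq -r '.[].params | select(.name=="Nvme0").traddr')
  # Create a 64 MB malloc bdev with a 512-byte block size
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512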
00:22:23.032 14:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']'
00:22:23.032 14:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:22:23.032 14:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:22:23.032 14:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:23.291 [2024-11-17 14:32:12.345912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:23.292 14:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:23.550 14:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:22:23.550 14:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:23.810 14:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:22:23.810 14:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:22:23.810 14:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:24.069 [2024-11-17 14:32:13.174377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:24.069 14:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:24.328 14:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:22:24.328 14:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:22:24.328 14:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:22:24.328 14:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:22:25.707 Initializing NVMe Controllers
00:22:25.707 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:22:25.707 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:22:25.707 Initialization complete. Launching workers.
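Condensed from the xtrace entries above, the NVMe-oF target bring-up is six rpc.py calls; this sketch again abbreviates the workspace path as $SPDK:

  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # Expose both staged bdevs as namespaces 1 and 2 of the subsystem
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  # Listen for data and discovery traffic on the 10.0.0.2 address assigned inside cvl_0_0_ns_spdk
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420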
00:22:25.707 ========================================================
00:22:25.707 Latency(us)
00:22:25.707 Device Information : IOPS MiB/s Average min max
00:22:25.707 PCIE (0000:5e:00.0) NSID 1 from core 0: 97580.66 381.17 327.39 35.43 5701.80
00:22:25.707 ========================================================
00:22:25.707 Total : 97580.66 381.17 327.39 35.43 5701.80
00:22:25.707
00:22:25.707 14:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:27.087 Initializing NVMe Controllers
00:22:27.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:27.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:27.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:27.087 Initialization complete. Launching workers.
00:22:27.087 ========================================================
00:22:27.087 Latency(us)
00:22:27.087 Device Information : IOPS MiB/s Average min max
00:22:27.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 11083.66 115.31 44942.93
00:22:27.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 20463.34 7151.75 47888.84
00:22:27.087 ========================================================
00:22:27.087 Total : 143.00 0.56 14428.86 115.31 47888.84
00:22:27.087
00:22:27.087 14:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:28.466 Initializing NVMe Controllers
00:22:28.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:28.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:28.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:28.466 Initialization complete. Launching workers.
00:22:28.466 ========================================================
00:22:28.466 Latency(us)
00:22:28.466 Device Information : IOPS MiB/s Average min max
00:22:28.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10942.00 42.74 2925.17 391.31 8682.98
00:22:28.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3909.00 15.27 8225.98 7056.61 15745.91
00:22:28.466 ========================================================
00:22:28.466 Total : 14851.00 58.01 4320.42 391.31 15745.91
00:22:28.466
00:22:28.466 14:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:22:28.466 14:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:22:28.466 14:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:30.999 Initializing NVMe Controllers
00:22:30.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:30.999 Controller IO queue size 128, less than required.
00:22:30.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:30.999 Controller IO queue size 128, less than required.
00:22:30.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:30.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:30.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:30.999 Initialization complete. Launching workers.
00:22:30.999 ========================================================
00:22:30.999 Latency(us)
00:22:30.999 Device Information : IOPS MiB/s Average min max
00:22:30.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1760.36 440.09 74087.79 54806.72 127994.11
00:22:30.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 612.60 153.15 216082.62 80258.66 323594.76
00:22:30.999 ========================================================
00:22:31.000 Total : 2372.96 593.24 110745.12 54806.72 323594.76
00:22:31.000
00:22:31.000 14:32:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:22:31.568 No valid NVMe controllers or AIO or URING devices found
00:22:31.569 Initializing NVMe Controllers
00:22:31.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:31.569 Controller IO queue size 128, less than required.
00:22:31.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:31.569 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:22:31.569 Controller IO queue size 128, less than required.
00:22:31.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:31.569 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:22:31.569 WARNING: Some requested NVMe devices were skipped
00:22:31.569 14:32:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:22:34.106 Initializing NVMe Controllers
00:22:34.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:34.106 Controller IO queue size 128, less than required.
00:22:34.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.106 Controller IO queue size 128, less than required.
00:22:34.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:34.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:34.106 Initialization complete. Launching workers.
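A quick way to sanity-check the result tables above: the MiB/s column is simply IOPS x IO size, so the totals can be recomputed by hand (the bc invocation is illustrative):

  # 4 KiB local PCIe run: 97580.66 IOPS * 4096 B / 2^20 = 381.17 MiB/s
  echo '97580.66 * 4096 / 1048576' | bc -l
  # 256 KiB fabric run, NSID 1: 1760.36 IOPS * 262144 B / 2^20 = 440.09 MiB/s
  echo '1760.36 * 262144 / 1048576' | bc -l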
00:22:34.106
00:22:34.106 ====================
00:22:34.106 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:22:34.106 TCP transport:
00:22:34.106 polls: 10654
00:22:34.106 idle_polls: 7225
00:22:34.106 sock_completions: 3429
00:22:34.106 nvme_completions: 6333
00:22:34.106 submitted_requests: 9460
00:22:34.106 queued_requests: 1
00:22:34.106
00:22:34.106 ====================
00:22:34.106 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:22:34.106 TCP transport:
00:22:34.106 polls: 10495
00:22:34.106 idle_polls: 6934
00:22:34.106 sock_completions: 3561
00:22:34.106 nvme_completions: 6557
00:22:34.106 submitted_requests: 9806
00:22:34.106 queued_requests: 1
00:22:34.106 ========================================================
00:22:34.106 Latency(us)
00:22:34.106 Device Information : IOPS MiB/s Average min max
00:22:34.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1582.08 395.52 82658.48 53466.38 135091.51
00:22:34.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1638.05 409.51 78948.91 47742.48 120720.29
00:22:34.106 ========================================================
00:22:34.106 Total : 3220.14 805.03 80771.46 47742.48 135091.51
00:22:34.106
00:22:34.106 14:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:22:34.106 14:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:34.106 rmmod nvme_tcp
00:22:34.106 rmmod nvme_fabrics
00:22:34.106 rmmod nvme_keyring
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1547462 ']'
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1547462
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1547462 ']'
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1547462
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1547462
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1547462'
00:22:34.106 killing process with pid 1547462
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1547462
00:22:34.106 14:32:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1547462
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:36.014 14:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:38.097 14:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:38.097
00:22:38.097 real 0m25.257s
00:22:38.097 user 1m7.040s
00:22:38.097 sys 0m8.404s
00:22:38.097 14:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:38.097 14:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:22:38.097 ************************************
00:22:38.097 END TEST nvmf_perf
00:22:38.097 ************************************
00:22:38.097 14:32:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:22:38.097 14:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:38.097 14:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:38.097 14:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:38.097 ************************************
00:22:38.097 START TEST nvmf_fio_host
00:22:38.097 ************************************
00:22:38.097 14:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:22:38.097 * Looking for test storage...
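Before moving on: condensed from the xtrace above, the nvmf_perf teardown amounts to the following steps ($SPDK abbreviated as before; the namespace removal itself happens inside _remove_spdk_ns, whose body is xtrace-suppressed in this log):

  $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # Unload the kernel initiator modules pulled in for the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 1547462    # the nvmf_tgt pid recorded at startup
  # Drop only the SPDK_NVMF-tagged firewall rule added during setup
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1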
00:22:38.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:38.097 14:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:38.097 14:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:38.097 14:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:38.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.097 --rc genhtml_branch_coverage=1 00:22:38.097 --rc genhtml_function_coverage=1 00:22:38.097 --rc genhtml_legend=1 00:22:38.097 --rc geninfo_all_blocks=1 00:22:38.097 --rc geninfo_unexecuted_blocks=1 00:22:38.097 00:22:38.097 ' 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:38.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.097 --rc genhtml_branch_coverage=1 00:22:38.097 --rc genhtml_function_coverage=1 00:22:38.097 --rc genhtml_legend=1 00:22:38.097 --rc geninfo_all_blocks=1 00:22:38.097 --rc geninfo_unexecuted_blocks=1 00:22:38.097 00:22:38.097 ' 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:38.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.097 --rc genhtml_branch_coverage=1 00:22:38.097 --rc genhtml_function_coverage=1 00:22:38.097 --rc genhtml_legend=1 00:22:38.097 --rc geninfo_all_blocks=1 00:22:38.097 --rc geninfo_unexecuted_blocks=1 00:22:38.097 00:22:38.097 ' 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:38.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.097 --rc genhtml_branch_coverage=1 00:22:38.097 --rc genhtml_function_coverage=1 00:22:38.097 --rc genhtml_legend=1 00:22:38.097 --rc geninfo_all_blocks=1 00:22:38.097 --rc geninfo_unexecuted_blocks=1 00:22:38.097 00:22:38.097 ' 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.097 14:32:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.097 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:38.098 
14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:38.098 14:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.679 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:44.680 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:44.680 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:44.680 Found net devices under 0000:86:00.0: cvl_0_0 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:44.680 Found net devices under 0000:86:00.1: cvl_0_1 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:44.680 14:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:44.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:22:44.680 00:22:44.680 --- 10.0.0.2 ping statistics --- 00:22:44.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.680 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:22:44.680 00:22:44.680 --- 10.0.0.1 ping statistics --- 00:22:44.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.680 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1553800 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1553800 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1553800 ']' 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.680 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.680 [2024-11-17 14:32:33.158513] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
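[editor's note] Condensing the nvmf_tcp_init trace above: the harness found two physical E810 ports (cvl_0_0 and cvl_0_1), moved one into a private network namespace, and gave each side a /24 address, so target and initiator traffic crosses real NICs on a single host. Restated as plain commands:

  # Restated from the trace; cvl_0_0 becomes the target port inside the
  # namespace, cvl_0_1 stays in the root namespace as the initiator.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port, then verify reachability in both directions
  # (rule comment shortened; the trace embeds the full rule text in it)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why nvmf_tgt above is launched under `ip netns exec cvl_0_0_ns_spdk`: the target only sees the namespaced port.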
00:22:44.680 [2024-11-17 14:32:33.158555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.680 [2024-11-17 14:32:33.223504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.680 [2024-11-17 14:32:33.266578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.681 [2024-11-17 14:32:33.266616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.681 [2024-11-17 14:32:33.266624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.681 [2024-11-17 14:32:33.266630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.681 [2024-11-17 14:32:33.266635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.681 [2024-11-17 14:32:33.271370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.681 [2024-11-17 14:32:33.271414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.681 [2024-11-17 14:32:33.271521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.681 [2024-11-17 14:32:33.271522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.681 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.681 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:44.681 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:44.681 [2024-11-17 14:32:33.536099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.681 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:44.681 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.681 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.681 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:44.681 Malloc1 00:22:44.681 14:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:44.939 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:45.199 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.458 [2024-11-17 14:32:34.420412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:45.458 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:45.719 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:45.719 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:45.719 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:45.719 14:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:45.977 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:45.977 fio-3.35 00:22:45.977 Starting 1 thread 00:22:48.521 [2024-11-17 14:32:37.414855] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec93d0 is same with the state(6) to be set 00:22:48.521 00:22:48.521 test: (groupid=0, jobs=1): err= 0: pid=1554180: Sun Nov 17 14:32:37 2024 00:22:48.521 read: IOPS=11.7k, BW=45.5MiB/s (47.8MB/s)(91.3MiB/2005msec) 00:22:48.521 slat (nsec): min=1562, max=370077, avg=1827.21, stdev=3313.35 00:22:48.521 clat (usec): min=3155, max=10793, avg=6056.80, stdev=468.67 00:22:48.521 lat (usec): min=3189, max=10794, avg=6058.62, stdev=468.45 00:22:48.521 clat percentiles (usec): 00:22:48.521 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:22:48.521 | 30.00th=[ 5866], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:22:48.521 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:22:48.521 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8455], 99.95th=[ 9765], 00:22:48.521 | 99.99th=[10683] 00:22:48.521 bw ( KiB/s): min=45536, max=47320, per=99.95%, avg=46614.00, stdev=763.36, samples=4 00:22:48.521 iops : min=11384, max=11830, avg=11653.50, stdev=190.84, samples=4 00:22:48.521 write: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(90.7MiB/2005msec); 0 zone resets 00:22:48.521 slat (nsec): min=1614, max=168619, avg=1890.92, stdev=1599.51 00:22:48.521 clat (usec): min=2625, max=9378, avg=4900.87, stdev=378.86 00:22:48.521 lat (usec): min=2641, max=9380, avg=4902.76, stdev=378.73 00:22:48.521 clat percentiles (usec): 00:22:48.521 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4621], 00:22:48.521 | 30.00th=[ 4686], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 5014], 00:22:48.521 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:22:48.521 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 8029], 99.95th=[ 8455], 00:22:48.521 | 99.99th=[ 9372] 00:22:48.521 bw ( KiB/s): min=45896, max=46816, per=100.00%, avg=46306.00, stdev=413.60, samples=4 00:22:48.521 iops : min=11474, max=11704, avg=11576.50, stdev=103.40, samples=4 00:22:48.521 lat (msec) : 4=0.45%, 10=99.53%, 20=0.01% 00:22:48.521 cpu : usr=70.66%, sys=26.10%, ctx=557, majf=0, minf=3 00:22:48.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:48.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:48.521 issued rwts: total=23377,23210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:48.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:48.521 00:22:48.521 Run status group 0 (all jobs): 00:22:48.521 READ: bw=45.5MiB/s (47.8MB/s), 45.5MiB/s-45.5MiB/s (47.8MB/s-47.8MB/s), io=91.3MiB (95.8MB), run=2005-2005msec 00:22:48.521 WRITE: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=90.7MiB (95.1MB), run=2005-2005msec 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:48.521 14:32:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:48.521 14:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:48.780 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:48.780 fio-3.35 00:22:48.780 Starting 1 thread 00:22:51.303 00:22:51.303 test: (groupid=0, jobs=1): err= 0: pid=1554759: Sun Nov 17 14:32:40 2024 00:22:51.303 read: IOPS=10.7k, BW=167MiB/s (175MB/s)(334MiB/2006msec) 00:22:51.303 slat (nsec): min=2575, max=84145, avg=2840.33, stdev=1216.62 00:22:51.303 clat (usec): min=1866, max=49291, avg=7040.82, stdev=3373.56 00:22:51.303 lat (usec): min=1868, max=49294, avg=7043.66, stdev=3373.59 00:22:51.303 clat percentiles (usec): 00:22:51.303 | 1.00th=[ 3752], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5473], 00:22:51.303 | 30.00th=[ 5997], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7308], 00:22:51.303 | 70.00th=[ 7635], 80.00th=[ 7963], 90.00th=[ 8717], 95.00th=[ 9372], 00:22:51.303 | 99.00th=[11469], 99.50th=[43254], 99.90th=[47973], 99.95th=[49021], 00:22:51.303 | 99.99th=[49021] 00:22:51.303 bw ( KiB/s): min=74880, max=96832, per=50.58%, 
avg=86304.00, stdev=9723.30, samples=4 00:22:51.303 iops : min= 4680, max= 6052, avg=5394.00, stdev=607.71, samples=4 00:22:51.303 write: IOPS=6397, BW=100.0MiB/s (105MB/s)(176MiB/1761msec); 0 zone resets 00:22:51.303 slat (usec): min=30, max=389, avg=31.79, stdev= 6.67 00:22:51.303 clat (usec): min=4233, max=15524, avg=8694.57, stdev=1548.79 00:22:51.303 lat (usec): min=4263, max=15555, avg=8726.36, stdev=1549.57 00:22:51.303 clat percentiles (usec): 00:22:51.303 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7439], 00:22:51.303 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:22:51.303 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10814], 95.00th=[11600], 00:22:51.303 | 99.00th=[12780], 99.50th=[13304], 99.90th=[14746], 99.95th=[15270], 00:22:51.303 | 99.99th=[15533] 00:22:51.303 bw ( KiB/s): min=78592, max=100800, per=87.48%, avg=89544.00, stdev=9802.67, samples=4 00:22:51.303 iops : min= 4912, max= 6300, avg=5596.50, stdev=612.67, samples=4 00:22:51.303 lat (msec) : 2=0.01%, 4=1.42%, 10=89.78%, 20=8.40%, 50=0.39% 00:22:51.303 cpu : usr=85.14%, sys=14.11%, ctx=37, majf=0, minf=3 00:22:51.303 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:51.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:51.303 issued rwts: total=21393,11266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.303 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:51.303 00:22:51.303 Run status group 0 (all jobs): 00:22:51.303 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=334MiB (351MB), run=2006-2006msec 00:22:51.303 WRITE: bw=100.0MiB/s (105MB/s), 100.0MiB/s-100.0MiB/s (105MB/s-105MB/s), io=176MiB (185MB), run=1761-1761msec 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:51.303 rmmod nvme_tcp 00:22:51.303 rmmod nvme_fabrics 00:22:51.303 rmmod nvme_keyring 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1553800 ']' 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@518 -- # killprocess 1553800 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1553800 ']' 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1553800 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.303 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1553800 00:22:51.562 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:51.562 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:51.562 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1553800' 00:22:51.562 killing process with pid 1553800 00:22:51.562 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1553800 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1553800 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.563 14:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.100 14:32:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:54.100 00:22:54.100 real 0m15.913s 00:22:54.100 user 0m46.501s 00:22:54.100 sys 0m6.576s 00:22:54.100 14:32:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.100 14:32:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.100 ************************************ 00:22:54.100 END TEST nvmf_fio_host 00:22:54.100 ************************************ 00:22:54.100 14:32:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:54.100 14:32:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:54.100 14:32:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.100 14:32:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.100 
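[editor's note] The I/O in the fio runs above came from stock fio driving SPDK's NVMe fio plugin: the harness first ldd-scans the plugin for libasan/libclang_rt.asan so any sanitizer runtime can be preloaded ahead of it (none was found here), then preloads the plugin itself and hands fio the target as a structured --filename string instead of a block device. Restated from the trace, with SPDK standing in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk:

  # Restated invocation; SPDK abbreviates the workspace path above.
  LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio \
      "$SPDK/app/fio/nvme/example_config.fio" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096

The plugin parses transport type, address family, address, service id and namespace out of that filename and opens a user-space NVMe/TCP queue pair, bypassing the kernel initiator entirely.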
************************************ 00:22:54.100 START TEST nvmf_failover 00:22:54.100 ************************************ 00:22:54.100 14:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:54.100 * Looking for test storage... 00:22:54.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:54.100 14:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:54.100 14:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:54.100 14:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.100 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:54.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.100 --rc genhtml_branch_coverage=1 00:22:54.100 --rc genhtml_function_coverage=1 00:22:54.100 --rc genhtml_legend=1 00:22:54.100 --rc geninfo_all_blocks=1 00:22:54.100 --rc geninfo_unexecuted_blocks=1 00:22:54.100 00:22:54.100 ' 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:54.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.101 --rc genhtml_branch_coverage=1 00:22:54.101 --rc genhtml_function_coverage=1 00:22:54.101 --rc genhtml_legend=1 00:22:54.101 --rc geninfo_all_blocks=1 00:22:54.101 --rc geninfo_unexecuted_blocks=1 00:22:54.101 00:22:54.101 ' 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:54.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.101 --rc genhtml_branch_coverage=1 00:22:54.101 --rc genhtml_function_coverage=1 00:22:54.101 --rc genhtml_legend=1 00:22:54.101 --rc geninfo_all_blocks=1 00:22:54.101 --rc geninfo_unexecuted_blocks=1 00:22:54.101 00:22:54.101 ' 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:54.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.101 --rc genhtml_branch_coverage=1 00:22:54.101 --rc genhtml_function_coverage=1 00:22:54.101 --rc genhtml_legend=1 00:22:54.101 --rc geninfo_all_blocks=1 00:22:54.101 --rc geninfo_unexecuted_blocks=1 00:22:54.101 00:22:54.101 ' 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
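[editor's note] The lt/cmp_versions trace above decides whether the installed lcov (1.15 here) predates version 2 by splitting both strings on '.', '-' and ':' and comparing the fields numerically, left to right. A compact equivalent (a sketch, not scripts/common.sh verbatim):

  # Sketch of the comparison idiom, not the repo's exact cmp_versions:
  version_lt() {
      local IFS='.-:' i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # equal is not "less than"
  }
  version_lt 1.15 2 && echo "old lcov: pass explicit coverage flags"

That is the branch taken here: 1 < 2 on the first field, so the LCOV_OPTS exports above add the --rc branch/function coverage switches.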
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
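[editor's note] Both suites log `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected`: the traced test '[' '' -eq 1 ']' hands an empty string to an arithmetic operator, which [ rejects. It is harmless here (the branch is skipped either way), but a defensive form, with `flag` as a hypothetical stand-in for whatever variable line 33 actually reads, looks like:

  # "flag" is a placeholder; the trace does not show the variable's name.
  flag=""                          # empty in the captured runs
  if [ "${flag:-0}" -eq 1 ]; then  # default empty/unset to 0 so [ sees an integer
      echo "feature enabled"
  fi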
00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:54.101 14:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.670 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.670 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:00.670 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
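[editor's note] failover.sh drives the data path with a separate bdevperf process, so it reserves a second RPC endpoint up front: rpc_py keeps talking to the target on the default /var/tmp/spdk.sock while /var/tmp/bdevperf.sock is set aside for the initiator. A sketch of the two-socket pattern (the attach call happens later in the script, outside this excerpt; its arguments here are illustrative):

  # Two SPDK apps, two control sockets; attach arguments are illustrative.
  rpc_py="$SPDK/scripts/rpc.py"             # $SPDK as above: the checked-out repo
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock
  "$rpc_py" nvmf_create_transport -t tcp -o -u 8192        # target, default socket
  "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1                # initiator side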
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:00.671 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:00.671 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:00.671 Found net devices under 0000:86:00.0: cvl_0_0 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:00.671 Found net devices under 0000:86:00.1: cvl_0_1 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:00.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:23:00.671 00:23:00.671 --- 10.0.0.2 ping statistics --- 00:23:00.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.671 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:00.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:23:00.671 00:23:00.671 --- 10.0.0.1 ping statistics --- 00:23:00.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.671 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:00.671 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.672 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:00.672 14:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1558739 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1558739 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1558739 ']' 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.672 [2024-11-17 14:32:49.091718] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:23:00.672 [2024-11-17 14:32:49.091771] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.672 [2024-11-17 14:32:49.172574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:00.672 [2024-11-17 14:32:49.214900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:00.672 [2024-11-17 14:32:49.214936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.672 [2024-11-17 14:32:49.214943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.672 [2024-11-17 14:32:49.214949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.672 [2024-11-17 14:32:49.214955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.672 [2024-11-17 14:32:49.216334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.672 [2024-11-17 14:32:49.216452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.672 [2024-11-17 14:32:49.216453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:00.672 [2024-11-17 14:32:49.529732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:00.672 Malloc0 00:23:00.672 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:00.929 14:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:01.187 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.187 [2024-11-17 14:32:50.380151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.445 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:01.445 [2024-11-17 14:32:50.580697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:01.445 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:01.702 [2024-11-17 14:32:50.773288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:23:01.702 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:01.702 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1559002 00:23:01.702 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:01.702 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1559002 /var/tmp/bdevperf.sock 00:23:01.702 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1559002 ']' 00:23:01.702 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.702 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.702 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.702 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.702 14:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.960 14:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.960 14:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:01.960 14:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:02.217 NVMe0n1 00:23:02.217 14:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:02.781 00:23:02.781 14:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:02.781 14:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1559229 00:23:02.781 14:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:03.712 14:32:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.969 14:32:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:07.247 14:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:07.504 00:23:07.504 14:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:07.504 [2024-11-17 14:32:56.687767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcc0d0 is same with the state(6) to be set
00:23:07.504 (further identical tcp.c:1773 notices for tqpair=0xfcc0d0, timestamps 14:32:56.687767-687916, collapsed)
00:23:07.504 14:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:10.780 14:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:10.780 [2024-11-17 14:32:59.907775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
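The control plane of this test is plain rpc.py calls: the target built at host/failover.sh@22-@28 above, then the listener cycling at @43-@57 visible around this point. A condensed sketch of both phases; $RPC and $NQN are shorthands of this summary, not variables in the script, and the individual add_listener calls are folded into a loop:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand for this sketch
  NQN=nqn.2016-06.io.spdk:cnode1                                         # shorthand for this sketch
  # target construction (host/failover.sh@22-@28)
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
  for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
  done
  # failover cycling (@43-@57): retire and revive paths while bdevperf keeps I/O running
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420; sleep 3
  # (@47 attaches a third path on port 4422 via the bdevperf RPC socket, then:)
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421; sleep 3
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420; sleep 1
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

Each remove_listener tears down the active TCP qpair, which is what produces the tcp.c:1773 recv-state error bursts logged here.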
00:23:10.780 14:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:12.156 14:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:12.156 [2024-11-17 14:33:01.140878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcce30 is same with the state(6) to be set
00:23:12.156 (further identical tcp.c:1773 notices for tqpair=0xfcce30, timestamps 14:33:01.140878-141144, collapsed)
00:23:12.156 14:33:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1559229
00:23:18.725 {
00:23:18.725   "results": [
00:23:18.725     {
00:23:18.725       "job": "NVMe0n1",
00:23:18.725       "core_mask": "0x1",
00:23:18.725       "workload": "verify",
00:23:18.725       "status": "finished",
00:23:18.725       "verify_range": {
00:23:18.725         "start": 0,
00:23:18.725         "length": 16384
00:23:18.725       },
00:23:18.725       "queue_depth": 128,
00:23:18.725       "io_size": 4096,
00:23:18.725       "runtime": 15.003576,
00:23:18.725       "iops": 10891.070235522518,
00:23:18.725       "mibps": 42.54324310750984,
00:23:18.725       "io_failed": 14877,
00:23:18.725       "io_timeout": 0,
00:23:18.725       "avg_latency_us": 10749.796233597677,
00:23:18.725       "min_latency_us": 623.304347826087,
00:23:18.725       "max_latency_us": 20857.544347826086
00:23:18.725     }
00:23:18.725   ],
00:23:18.725   "core_count": 1
00:23:18.725 }
00:23:18.725 14:33:06 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@61 -- # killprocess 1559002 00:23:18.725 14:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1559002 ']' 00:23:18.725 14:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1559002 00:23:18.725 14:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:18.726 14:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.726 14:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1559002 00:23:18.726 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:18.726 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:18.726 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1559002' 00:23:18.726 killing process with pid 1559002 00:23:18.726 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1559002 00:23:18.726 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1559002 00:23:18.726 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:18.726 [2024-11-17 14:32:50.848781] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:23:18.726 [2024-11-17 14:32:50.848835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559002 ] 00:23:18.726 [2024-11-17 14:32:50.923928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.726 [2024-11-17 14:32:50.965676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.726 Running I/O for 15 seconds... 
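For context on the try.txt dump that follows: the I/O load is a single bdevperf instance driving one multipath NVMe bdev, started as sketched below. $SPDK is a shorthand of this summary; the binary, socket, flags, and attach arguments are copied from the trace above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand for this sketch
  # start bdevperf idle (-z) on its own RPC socket: verify workload, qd 128, 4 KiB I/O, 15 s
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  # attach the same subsystem over two ports under one controller name; -x failover enables path failover
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # start the timed run; the JSON summary above is its output
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &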
00:23:18.726 10992.00 IOPS, 42.94 MiB/s [2024-11-17T13:33:07.951Z]
00:23:18.726 [2024-11-17 14:32:53.005213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.726 [2024-11-17 14:32:53.005253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.726 [2024-11-17 14:32:53.005646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.726 [2024-11-17 14:32:53.005652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.726 (the try.txt dump continues with one print_command/print_completion pair per outstanding command on qid:1 -- WRITEs for lba 99032-99296 and READs for lba 98280-98968 in this excerpt, every one completing ABORTED - SQ DELETION (00/08); only the cid and lba fields differ between entries, so the remaining pairs are collapsed)
READ sqid:1 cid:56 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.729 [2024-11-17 14:32:53.007159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.729 [2024-11-17 14:32:53.007167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.729 [2024-11-17 14:32:53.007177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.729 [2024-11-17 14:32:53.007186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.729 [2024-11-17 14:32:53.007192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.729 [2024-11-17 14:32:53.007201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.729 [2024-11-17 14:32:53.007209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.729 [2024-11-17 14:32:53.007218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.729 [2024-11-17 14:32:53.007224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.729 [2024-11-17 14:32:53.007232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.729 [2024-11-17 14:32:53.007239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.729 [2024-11-17 14:32:53.007247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9d70 is same with the state(6) to be set 00:23:18.729 [2024-11-17 14:32:53.007257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.729 [2024-11-17 14:32:53.007264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.729 [2024-11-17 14:32:53.007271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99024 len:8 PRP1 0x0 PRP2 0x0 00:23:18.729 [2024-11-17 14:32:53.007279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.729 [2024-11-17 14:32:53.007330] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:18.729 [2024-11-17 14:32:53.007358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.729 [2024-11-17 14:32:53.007367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.729 [2024-11-17 14:32:53.007375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.729 [2024-11-17 14:32:53.007382] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.729 [2024-11-17 14:32:53.007389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.729 [2024-11-17 14:32:53.007397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.729 [2024-11-17 14:32:53.007404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.729 [2024-11-17 14:32:53.007411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.729 [2024-11-17 14:32:53.007418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:18.729 [2024-11-17 14:32:53.010278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:18.729 [2024-11-17 14:32:53.010305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5340 (9): Bad file descriptor 00:23:18.729 [2024-11-17 14:32:53.194312] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:18.730 10053.50 IOPS, 39.27 MiB/s [2024-11-17T13:33:07.955Z] 10395.00 IOPS, 40.61 MiB/s [2024-11-17T13:33:07.955Z] 10612.00 IOPS, 41.45 MiB/s [2024-11-17T13:33:07.955Z] [2024-11-17 14:32:56.688666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.730 [2024-11-17 14:32:56.688700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.730 [2024-11-17 14:32:56.688715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.730 [2024-11-17 14:32:56.688729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.730 [2024-11-17 14:32:56.688739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.730 [2024-11-17 14:32:56.688747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.730 [2024-11-17 14:32:56.688757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.730 [2024-11-17 14:32:56.688763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.730 [2024-11-17 14:32:56.688773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.730 [2024-11-17 14:32:56.688781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.730 [2024-11-17 14:32:56.688790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.730 [2024-11-17 14:32:56.688798] nvme_qpair.c: 
00:23:18.730 [2024-11-17 14:32:56.688666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.730 [2024-11-17 14:32:56.688700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/ABORTED - SQ DELETION pairs repeat for lba:86872 through lba:86952 ...]
00:23:18.730 [2024-11-17 14:32:56.688899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.730 [2024-11-17 14:32:56.688907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE/ABORTED - SQ DELETION pairs repeat for lba:87160 through lba:87600, READ pairs for lba:86960 through lba:87008, and further WRITE pairs for lba:87608 through lba:87824 ...]
00:23:18.733 [2024-11-17 14:32:56.690359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:18.733 [2024-11-17 14:32:56.690368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87832 len:8 PRP1 0x0 PRP2 0x0
00:23:18.733 [2024-11-17 14:32:56.690375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... queued i/o aborted and completed manually for WRITE lba:87840 through lba:87880 and READ lba:87016 through lba:87144, each ABORTED - SQ DELETION (00/08) ...]
00:23:18.734 [2024-11-17 14:32:56.691017] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:18.734 [2024-11-17 14:32:56.691040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.734 [2024-11-17 14:32:56.691048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST/ABORTED pair repeats for admin cid:2, cid:1 and cid:0 ...]
cdw11:00000000 00:23:18.734 [2024-11-17 14:32:56.691091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.734 [2024-11-17 14:32:56.691098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:18.734 [2024-11-17 14:32:56.691129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5340 (9): Bad file descriptor 00:23:18.734 [2024-11-17 14:32:56.693940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:18.734 [2024-11-17 14:32:56.796454] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:23:18.734 10470.20 IOPS, 40.90 MiB/s [2024-11-17T13:33:07.959Z] 10608.33 IOPS, 41.44 MiB/s [2024-11-17T13:33:07.959Z] 10681.29 IOPS, 41.72 MiB/s [2024-11-17T13:33:07.959Z] 10722.88 IOPS, 41.89 MiB/s [2024-11-17T13:33:07.959Z] 10758.89 IOPS, 42.03 MiB/s [2024-11-17T13:33:07.959Z] [2024-11-17 14:33:01.142563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.734 [2024-11-17 14:33:01.142598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.734 [2024-11-17 14:33:01.142613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.734 [2024-11-17 14:33:01.142621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.734 [2024-11-17 14:33:01.142630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.734 [2024-11-17 14:33:01.142638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.734 [2024-11-17 14:33:01.142646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.734 [2024-11-17 14:33:01.142658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 
14:33:01.142713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.142987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.142994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.143002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.143009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.143016] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.143023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.143033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.143041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.143050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.143056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.143064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.143070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.143078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.143086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.143094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.143101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.143110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.143117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.143125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.143132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.143140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.735 [2024-11-17 14:33:01.143148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.735 [2024-11-17 14:33:01.143156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.736 [2024-11-17 14:33:01.143162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 
nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.736 [2024-11-17 14:33:01.143177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.736 [2024-11-17 14:33:01.143192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:118792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:118808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:118816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:118840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118856 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:118928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:118936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 
[2024-11-17 14:33:01.143492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:118960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:118968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:119024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.736 [2024-11-17 14:33:01.143663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.736 [2024-11-17 14:33:01.143671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:119064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:119088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.143988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.143996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.144013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.144028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.144043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.144057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.144074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.144090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.144105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 
[2024-11-17 14:33:01.144120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.144136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.144151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.144166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.737 [2024-11-17 14:33:01.144172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.737 [2024-11-17 14:33:01.144180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.738 [2024-11-17 14:33:01.144188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.738 [2024-11-17 14:33:01.144204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119312 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119320 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119328 len:8 PRP1 0x0 PRP2 0x0 
00:23:18.738 [2024-11-17 14:33:01.144310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119336 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119344 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119352 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119360 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119368 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119376 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144471] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119384 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119392 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119400 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119408 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.738 [2024-11-17 14:33:01.144589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119416 len:8 PRP1 0x0 PRP2 0x0 00:23:18.738 [2024-11-17 14:33:01.144596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.738 [2024-11-17 14:33:01.144602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.738 [2024-11-17 14:33:01.144608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.739 [2024-11-17 14:33:01.144614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119424 len:8 PRP1 0x0 PRP2 0x0 00:23:18.739 [2024-11-17 14:33:01.144621] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.144628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.739 [2024-11-17 14:33:01.144633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.739 [2024-11-17 14:33:01.144640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119432 len:8 PRP1 0x0 PRP2 0x0 00:23:18.739 [2024-11-17 14:33:01.144649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.144656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.739 [2024-11-17 14:33:01.144661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.739 [2024-11-17 14:33:01.144667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119440 len:8 PRP1 0x0 PRP2 0x0 00:23:18.739 [2024-11-17 14:33:01.144673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.144680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.739 [2024-11-17 14:33:01.144685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.739 [2024-11-17 14:33:01.144691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118736 len:8 PRP1 0x0 PRP2 0x0 00:23:18.739 [2024-11-17 14:33:01.144698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.144707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.739 [2024-11-17 14:33:01.144712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.739 [2024-11-17 14:33:01.144718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118744 len:8 PRP1 0x0 PRP2 0x0 00:23:18.739 [2024-11-17 14:33:01.144724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.144731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.739 [2024-11-17 14:33:01.144736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.739 [2024-11-17 14:33:01.144741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118752 len:8 PRP1 0x0 PRP2 0x0 00:23:18.739 [2024-11-17 14:33:01.144748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.144756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.739 [2024-11-17 14:33:01.144761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.739 [2024-11-17 14:33:01.144767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118760 len:8 PRP1 0x0 PRP2 0x0 00:23:18.739 [2024-11-17 14:33:01.144773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.144780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.739 [2024-11-17 14:33:01.144785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.739 [2024-11-17 14:33:01.144791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118768 len:8 PRP1 0x0 PRP2 0x0 00:23:18.739 [2024-11-17 14:33:01.144797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.154883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.739 [2024-11-17 14:33:01.154894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.739 [2024-11-17 14:33:01.154902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118776 len:8 PRP1 0x0 PRP2 0x0 00:23:18.739 [2024-11-17 14:33:01.154910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.154917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.739 [2024-11-17 14:33:01.154925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.739 [2024-11-17 14:33:01.154930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118784 len:8 PRP1 0x0 PRP2 0x0 00:23:18.739 [2024-11-17 14:33:01.154936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.154983] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:18.739 [2024-11-17 14:33:01.155008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.739 [2024-11-17 14:33:01.155016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.155024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.739 [2024-11-17 14:33:01.155030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.155038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.739 [2024-11-17 14:33:01.155045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.155054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.739 [2024-11-17 14:33:01.155063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.739 [2024-11-17 14:33:01.155071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:23:18.739 [2024-11-17 14:33:01.155102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5340 (9): Bad file descriptor
00:23:18.739 [2024-11-17 14:33:01.158443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:18.739 [2024-11-17 14:33:01.180714] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:23:18.739 10759.20 IOPS, 42.03 MiB/s [2024-11-17T13:33:07.964Z] 10803.64 IOPS, 42.20 MiB/s [2024-11-17T13:33:07.964Z] 10818.67 IOPS, 42.26 MiB/s [2024-11-17T13:33:07.964Z] 10852.15 IOPS, 42.39 MiB/s [2024-11-17T13:33:07.964Z] 10868.79 IOPS, 42.46 MiB/s
00:23:18.739 Latency(us)
00:23:18.739 [2024-11-17T13:33:07.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:18.739 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:18.739 Verification LBA range: start 0x0 length 0x4000
00:23:18.739 NVMe0n1 : 15.00 10891.07 42.54 991.56 0.00 10749.80 623.30 20857.54
00:23:18.739 [2024-11-17T13:33:07.964Z] ===================================================================================================================
00:23:18.739 [2024-11-17T13:33:07.964Z] Total : 10891.07 42.54 991.56 0.00 10749.80 623.30 20857.54
00:23:18.739 Received shutdown signal, test time was about 15.000000 seconds
00:23:18.739
00:23:18.739 Latency(us)
00:23:18.739 [2024-11-17T13:33:07.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:18.739 [2024-11-17T13:33:07.964Z] ===================================================================================================================
00:23:18.739 [2024-11-17T13:33:07.964Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1562240
00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1562240 /var/tmp/bdevperf.sock
00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1562240 ']'
00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:18.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
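
The pass criterion traced above (host/failover.sh lines 65-67) boils down to counting completed resets in the captured bdevperf output. A minimal bash sketch of that check, assuming the run's log was saved to try.txt as it is in this job:

    count=$(grep -c 'Resetting controller successful' try.txt)  # one match per completed failover
    if (( count != 3 )); then                                   # three hops expected: 4420 -> 4421 -> 4422 -> 4420
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi
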
00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:18.739 [2024-11-17 14:33:07.604676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:18.739 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:18.739 [2024-11-17 14:33:07.805272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:18.740 14:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:18.997 NVMe0n1 00:23:18.997 14:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:19.255 00:23:19.255 14:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:19.822 00:23:19.822 14:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:19.822 14:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:19.822 14:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.080 14:33:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:23.362 14:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:23.362 14:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:23.362 14:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:23.362 14:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1562973 00:23:23.362 14:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1562973 00:23:24.297 { 00:23:24.297 "results": [ 00:23:24.297 { 00:23:24.297 "job": "NVMe0n1", 00:23:24.297 "core_mask": "0x1", 
00:23:24.297 "workload": "verify", 00:23:24.297 "status": "finished", 00:23:24.297 "verify_range": { 00:23:24.297 "start": 0, 00:23:24.297 "length": 16384 00:23:24.297 }, 00:23:24.297 "queue_depth": 128, 00:23:24.297 "io_size": 4096, 00:23:24.297 "runtime": 1.013977, 00:23:24.297 "iops": 10953.89737637047, 00:23:24.297 "mibps": 42.78866162644715, 00:23:24.297 "io_failed": 0, 00:23:24.297 "io_timeout": 0, 00:23:24.297 "avg_latency_us": 11642.797706734102, 00:23:24.297 "min_latency_us": 2521.711304347826, 00:23:24.297 "max_latency_us": 14019.005217391305 00:23:24.297 } 00:23:24.297 ], 00:23:24.297 "core_count": 1 00:23:24.297 } 00:23:24.297 14:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:24.297 [2024-11-17 14:33:07.221133] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:23:24.297 [2024-11-17 14:33:07.221186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1562240 ] 00:23:24.297 [2024-11-17 14:33:07.298142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.297 [2024-11-17 14:33:07.336136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.297 [2024-11-17 14:33:09.138851] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:24.297 [2024-11-17 14:33:09.138899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.297 [2024-11-17 14:33:09.138912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.297 [2024-11-17 14:33:09.138921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.297 [2024-11-17 14:33:09.138929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.297 [2024-11-17 14:33:09.138936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.297 [2024-11-17 14:33:09.138943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.297 [2024-11-17 14:33:09.138950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.297 [2024-11-17 14:33:09.138958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.297 [2024-11-17 14:33:09.138969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:23:24.297 [2024-11-17 14:33:09.138994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:24.297 [2024-11-17 14:33:09.139008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x742340 (9): Bad file descriptor 00:23:24.297 [2024-11-17 14:33:09.149640] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:24.297 Running I/O for 1 seconds... 00:23:24.297 10870.00 IOPS, 42.46 MiB/s 00:23:24.297 Latency(us) 00:23:24.297 [2024-11-17T13:33:13.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.297 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:24.297 Verification LBA range: start 0x0 length 0x4000 00:23:24.297 NVMe0n1 : 1.01 10953.90 42.79 0.00 0.00 11642.80 2521.71 14019.01 00:23:24.297 [2024-11-17T13:33:13.522Z] =================================================================================================================== 00:23:24.297 [2024-11-17T13:33:13.522Z] Total : 10953.90 42.79 0.00 0.00 11642.80 2521.71 14019.01 00:23:24.297 14:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.297 14:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:24.556 14:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.814 14:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.814 14:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:25.072 14:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.330 14:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1562240 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1562240 ']' 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1562240 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1562240 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1562240' 00:23:28.613 killing process with pid 1562240 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1562240 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1562240 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:28.613 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.872 rmmod nvme_tcp 00:23:28.872 rmmod nvme_fabrics 00:23:28.872 rmmod nvme_keyring 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1558739 ']' 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1558739 00:23:28.872 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1558739 ']' 00:23:28.873 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1558739 00:23:28.873 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:28.873 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.873 14:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1558739 00:23:28.873 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:28.873 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.873 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1558739' 00:23:28.873 killing process with pid 1558739 00:23:28.873 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1558739 00:23:28.873 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1558739 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.132 14:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.670 14:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:31.670 00:23:31.670 real 0m37.404s 00:23:31.670 user 1m58.396s 00:23:31.670 sys 0m8.005s 00:23:31.670 14:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:31.670 14:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:31.670 ************************************ 00:23:31.670 END TEST nvmf_failover 00:23:31.670 ************************************ 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.671 ************************************ 00:23:31.671 START TEST nvmf_host_discovery 00:23:31.671 ************************************ 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:31.671 * Looking for test storage... 
00:23:31.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:31.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.671 --rc genhtml_branch_coverage=1 00:23:31.671 --rc genhtml_function_coverage=1 00:23:31.671 --rc genhtml_legend=1 00:23:31.671 --rc geninfo_all_blocks=1 00:23:31.671 --rc geninfo_unexecuted_blocks=1 00:23:31.671 00:23:31.671 ' 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:31.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.671 --rc genhtml_branch_coverage=1 00:23:31.671 --rc genhtml_function_coverage=1 00:23:31.671 --rc genhtml_legend=1 00:23:31.671 --rc geninfo_all_blocks=1 00:23:31.671 --rc geninfo_unexecuted_blocks=1 00:23:31.671 00:23:31.671 ' 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:31.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.671 --rc genhtml_branch_coverage=1 00:23:31.671 --rc genhtml_function_coverage=1 00:23:31.671 --rc genhtml_legend=1 00:23:31.671 --rc geninfo_all_blocks=1 00:23:31.671 --rc geninfo_unexecuted_blocks=1 00:23:31.671 00:23:31.671 ' 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:31.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.671 --rc genhtml_branch_coverage=1 00:23:31.671 --rc genhtml_function_coverage=1 00:23:31.671 --rc genhtml_legend=1 00:23:31.671 --rc geninfo_all_blocks=1 00:23:31.671 --rc geninfo_unexecuted_blocks=1 00:23:31.671 00:23:31.671 ' 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:31.671 14:33:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.671 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.672 14:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:38.244 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:38.244 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.244 14:33:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:38.244 Found net devices under 0000:86:00.0: cvl_0_0 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:38.244 Found net devices under 0000:86:00.1: cvl_0_1 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:38.244 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:38.244 
14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:38.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:23:38.245 00:23:38.245 --- 10.0.0.2 ping statistics --- 00:23:38.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.245 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:23:38.245 00:23:38.245 --- 10.0.0.1 ping statistics --- 00:23:38.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.245 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1567423 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1567423 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1567423 ']' 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.245 [2024-11-17 14:33:26.585206] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:23:38.245 [2024-11-17 14:33:26.585251] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.245 [2024-11-17 14:33:26.664469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.245 [2024-11-17 14:33:26.705607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.245 [2024-11-17 14:33:26.705643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.245 [2024-11-17 14:33:26.705650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.245 [2024-11-17 14:33:26.705657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.245 [2024-11-17 14:33:26.705662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.245 [2024-11-17 14:33:26.706222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.245 [2024-11-17 14:33:26.841455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.245 [2024-11-17 14:33:26.853618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.245 null0 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.245 null1 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1567451 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1567451 /tmp/host.sock 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1567451 ']' 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:38.245 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.245 14:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.245 [2024-11-17 14:33:26.930870] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:23:38.245 [2024-11-17 14:33:26.930914] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1567451 ] 00:23:38.245 [2024-11-17 14:33:27.007826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.245 [2024-11-17 14:33:27.051003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.245 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.246 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.506 [2024-11-17 14:33:27.475195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:38.506 14:33:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:38.506 14:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:39.075 [2024-11-17 14:33:28.209506] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:39.075 [2024-11-17 14:33:28.209525] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:39.075 [2024-11-17 14:33:28.209538] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:39.075 
[2024-11-17 14:33:28.295796] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:39.334 [2024-11-17 14:33:28.471747] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:39.334 [2024-11-17 14:33:28.472474] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11d3dd0:1 started. 00:23:39.334 [2024-11-17 14:33:28.473903] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:39.334 [2024-11-17 14:33:28.473918] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:39.334 [2024-11-17 14:33:28.518977] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11d3dd0 was disconnected and freed. delete nvme_qpair. 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.594 14:33:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.594 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:39.863 14:33:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.863 [2024-11-17 14:33:28.874315] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11d41a0:1 started. 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.863 [2024-11-17 14:33:28.919869] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11d41a0 was disconnected and freed. 
delete nvme_qpair. 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.863 [2024-11-17 14:33:28.971295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:39.863 [2024-11-17 14:33:28.972101] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:39.863 [2024-11-17 14:33:28.972119] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:39.863 14:33:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:39.863 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.864 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.864 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:39.864 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:39.864 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.864 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.864 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.864 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.864 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.864 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.864 14:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- 
# waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:39.864 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:40.222 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.222 [2024-11-17 14:33:29.101523] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:40.222 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:40.222 14:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:40.222 [2024-11-17 14:33:29.206158] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:40.222 [2024-11-17 14:33:29.206191] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:40.222 [2024-11-17 14:33:29.206199] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:40.222 [2024-11-17 14:33:29.206203] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.250 14:33:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.250 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.250 [2024-11-17 14:33:30.231235] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:41.251 [2024-11-17 14:33:30.231259] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.251 [2024-11-17 14:33:30.236208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.251 [2024-11-17 14:33:30.236229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.251 [2024-11-17 14:33:30.236238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.251 [2024-11-17 14:33:30.236246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.251 [2024-11-17 14:33:30.236254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.251 [2024-11-17 14:33:30.236261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.251 [2024-11-17 14:33:30.236270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.251 [2024-11-17 14:33:30.236277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.251 [2024-11-17 14:33:30.236285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4390 is same with the state(6) to be set 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.251 [2024-11-17 14:33:30.246221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4390 (9): Bad file descriptor 00:23:41.251 [2024-11-17 14:33:30.256258] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:41.251 [2024-11-17 14:33:30.256272] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:41.251 [2024-11-17 14:33:30.256276] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:41.251 [2024-11-17 14:33:30.256281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:41.251 [2024-11-17 14:33:30.256300] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:41.251 [2024-11-17 14:33:30.256555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.251 [2024-11-17 14:33:30.256571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4390 with addr=10.0.0.2, port=4420 00:23:41.251 [2024-11-17 14:33:30.256581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4390 is same with the state(6) to be set 00:23:41.251 [2024-11-17 14:33:30.256595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4390 (9): Bad file descriptor 00:23:41.251 [2024-11-17 14:33:30.256614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:41.251 [2024-11-17 14:33:30.256622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:41.251 [2024-11-17 14:33:30.256631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:41.251 [2024-11-17 14:33:30.256637] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:41.251 [2024-11-17 14:33:30.256642] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:41.251 [2024-11-17 14:33:30.256646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
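[annotation] The common/autotest_common.sh@918..@924 frames that repeat throughout this trace all come from the test suite's generic polling helper. A minimal reconstruction from the visible xtrace, for readers following along — the real function lives in SPDK's test/common/autotest_common.sh and its exact body is assumed here, not quoted:

waitforcondition() {
	local cond=$1   # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'  (@918)
	local max=10    # poll for at most ~10 seconds                      (@919)
	while ((max--)); do                                               # (@920)
		if eval "$cond"; then                                     # (@921)
			return 0 # condition met                          # (@922)
		fi
		sleep 1                                                   # (@924)
	done
	return 1 # timed out; the caller treats this as a test failure
}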
00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.251 [2024-11-17 14:33:30.266330] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:41.251 [2024-11-17 14:33:30.266341] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:41.251 [2024-11-17 14:33:30.266345] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:41.251 [2024-11-17 14:33:30.266349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:41.251 [2024-11-17 14:33:30.266369] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:41.251 [2024-11-17 14:33:30.266469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.251 [2024-11-17 14:33:30.266481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4390 with addr=10.0.0.2, port=4420 00:23:41.251 [2024-11-17 14:33:30.266489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4390 is same with the state(6) to be set 00:23:41.251 [2024-11-17 14:33:30.266500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4390 (9): Bad file descriptor 00:23:41.251 [2024-11-17 14:33:30.266510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:41.251 [2024-11-17 14:33:30.266516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:41.251 [2024-11-17 14:33:30.266523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:41.251 [2024-11-17 14:33:30.266529] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:41.251 [2024-11-17 14:33:30.266533] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:41.251 [2024-11-17 14:33:30.266537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:41.251 [2024-11-17 14:33:30.276400] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:41.251 [2024-11-17 14:33:30.276413] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:41.251 [2024-11-17 14:33:30.276417] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:41.251 [2024-11-17 14:33:30.276421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:41.251 [2024-11-17 14:33:30.276436] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
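[annotation] Two rpc_cmd styles alternate in this trace: calls without -s (nvmf_create_subsystem, nvmf_subsystem_add_ns, ...) configure the NVMe-oF target app, while calls with -s /tmp/host.sock drive the separate host app that owns the discovery controller. A sketch of that split — the two-app layout is inferred from the trace, and the target's socket path is SPDK's usual default rather than something this log states:

rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1   # target app, default /var/tmp/spdk.sock (assumed)
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers              # host app running discovery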
00:23:41.251 [2024-11-17 14:33:30.276626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.251 [2024-11-17 14:33:30.276641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4390 with addr=10.0.0.2, port=4420 00:23:41.251 [2024-11-17 14:33:30.276648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4390 is same with the state(6) to be set 00:23:41.251 [2024-11-17 14:33:30.276660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4390 (9): Bad file descriptor 00:23:41.251 [2024-11-17 14:33:30.276671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:41.251 [2024-11-17 14:33:30.276677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:41.251 [2024-11-17 14:33:30.276684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:41.251 [2024-11-17 14:33:30.276690] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:41.251 [2024-11-17 14:33:30.276695] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:41.251 [2024-11-17 14:33:30.276699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.251 [2024-11-17 14:33:30.286466] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:41.251 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:41.251 [2024-11-17 14:33:30.286479] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:41.251 [2024-11-17 14:33:30.286486] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:41.251 [2024-11-17 14:33:30.286491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:41.251 [2024-11-17 14:33:30.286505] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
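[annotation] The host/discovery.sh@55 and @59 frames show how the test reads host-side state: query over the host RPC socket, extract names with jq, and flatten to one sorted line so string comparisons like [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]] work. A sketch matching the traced pipelines (function bodies reconstructed, not quoted):

get_subsystem_names() { # @59
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() { # @55
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
# After both namespaces attach, get_bdev_list prints "nvme0n1 nvme0n2".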
00:23:41.251 [2024-11-17 14:33:30.286699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.251 [2024-11-17 14:33:30.286713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4390 with addr=10.0.0.2, port=4420 00:23:41.251 [2024-11-17 14:33:30.286721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4390 is same with the state(6) to be set 00:23:41.251 [2024-11-17 14:33:30.286732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4390 (9): Bad file descriptor 00:23:41.251 [2024-11-17 14:33:30.286750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:41.251 [2024-11-17 14:33:30.286758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:41.251 [2024-11-17 14:33:30.286766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:41.251 [2024-11-17 14:33:30.286771] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:41.251 [2024-11-17 14:33:30.286776] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:41.252 [2024-11-17 14:33:30.286780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.252 [2024-11-17 14:33:30.296536] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:41.252 [2024-11-17 14:33:30.296550] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:41.252 [2024-11-17 14:33:30.296554] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:41.252 [2024-11-17 14:33:30.296559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:41.252 [2024-11-17 14:33:30.296574] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:41.252 [2024-11-17 14:33:30.296752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.252 [2024-11-17 14:33:30.296766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4390 with addr=10.0.0.2, port=4420 00:23:41.252 [2024-11-17 14:33:30.296780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4390 is same with the state(6) to be set 00:23:41.252 [2024-11-17 14:33:30.296792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4390 (9): Bad file descriptor 00:23:41.252 [2024-11-17 14:33:30.296802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:41.252 [2024-11-17 14:33:30.296809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:41.252 [2024-11-17 14:33:30.296817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:41.252 [2024-11-17 14:33:30.296823] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:41.252 [2024-11-17 14:33:30.296828] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:41.252 [2024-11-17 14:33:30.296832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:41.252 [2024-11-17 14:33:30.306605] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:41.252 [2024-11-17 14:33:30.306616] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:41.252 [2024-11-17 14:33:30.306620] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:41.252 [2024-11-17 14:33:30.306624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:41.252 [2024-11-17 14:33:30.306639] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:41.252 [2024-11-17 14:33:30.306885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.252 [2024-11-17 14:33:30.306898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4390 with addr=10.0.0.2, port=4420 00:23:41.252 [2024-11-17 14:33:30.306905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4390 is same with the state(6) to be set 00:23:41.252 [2024-11-17 14:33:30.306917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4390 (9): Bad file descriptor 00:23:41.252 [2024-11-17 14:33:30.306935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:41.252 [2024-11-17 14:33:30.306943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:41.252 [2024-11-17 14:33:30.306950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:41.252 [2024-11-17 14:33:30.306956] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
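[annotation] These repeated "connect() failed, errno = 111" blocks are expected noise, not a fault: errno 111 is ECONNREFUSED, and discovery.sh@127 has just removed the 4420 listener, so bdev_nvme's reconnect attempts to the dead path are refused until the next discovery log page (just below) reports nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 "not found" and the path is dropped. The step being exercised, as traced at @127 and @131:

rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' # only 4421 should remain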
00:23:41.252 [2024-11-17 14:33:30.306961] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:41.252 [2024-11-17 14:33:30.306965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:41.252 [2024-11-17 14:33:30.316670] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:41.252 [2024-11-17 14:33:30.316680] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:41.252 [2024-11-17 14:33:30.316684] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:41.252 [2024-11-17 14:33:30.316689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:41.252 [2024-11-17 14:33:30.316702] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:41.252 [2024-11-17 14:33:30.316943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.252 [2024-11-17 14:33:30.316959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4390 with addr=10.0.0.2, port=4420 00:23:41.252 [2024-11-17 14:33:30.316967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4390 is same with the state(6) to be set 00:23:41.252 [2024-11-17 14:33:30.316978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4390 (9): Bad file descriptor 00:23:41.252 [2024-11-17 14:33:30.317007] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:41.252 [2024-11-17 14:33:30.317021] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:41.252 [2024-11-17 14:33:30.317038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:41.252 [2024-11-17 14:33:30.317047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:41.252 [2024-11-17 14:33:30.317055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:41.252 [2024-11-17 14:33:30.317061] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:41.252 [2024-11-17 14:33:30.317065] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:41.252 [2024-11-17 14:33:30.317069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
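[annotation] The checks that follow use two more discovery.sh helpers, reconstructed here from the @63 and @74/@75 frames (bodies are a sketch; the socket path and notify_id bookkeeping match the trace):

get_subsystem_paths() { # @63: sorted listener ports for one controller
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
get_notification_count() { # @74/@75: count RPC notifications since the last check
	notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
	notify_id=$((notify_id + notification_count)) # matches notify_id=0 -> 1 -> 2 in this log
}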
00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:41.252 14:33:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:41.252 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:41.253 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.253 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.253 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:41.253 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:41.253 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.253 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.253 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.253 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.253 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.253 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.253 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.512 
14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.512 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.513 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.513 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:41.513 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:41.513 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:41.513 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.513 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.513 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.513 14:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.450 [2024-11-17 14:33:31.618825] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:42.450 [2024-11-17 14:33:31.618841] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:42.450 [2024-11-17 14:33:31.618852] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:42.710 [2024-11-17 14:33:31.706120] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:42.710 [2024-11-17 14:33:31.811834] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:42.710 [2024-11-17 14:33:31.812471] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x11d2a30:1 started. 
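host/discovery.sh@141 above re-arms discovery over SPDK's JSON-RPC socket, and the new controller (cnode0, 3) attaches against 4421. The flags map one-to-one onto the request bodies printed below; a sketch of the equivalent direct call, with the socket path and arguments copied from the log:

  # -w corresponds to "wait_for_attach": true in the JSON-RPC request below.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -w

Re-issuing the same call while the 'nvme' discovery context still exists is what produces the -17 "File exists" JSON-RPC error that the NOT/valid_exec_arg wrapper asserts next.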
00:23:42.710 [2024-11-17 14:33:31.814087] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:42.710 [2024-11-17 14:33:31.814112] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.710 [2024-11-17 14:33:31.817249] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x11d2a30 was disconnected and freed. delete nvme_qpair. 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.710 request: 00:23:42.710 { 00:23:42.710 "name": "nvme", 00:23:42.710 "trtype": "tcp", 00:23:42.710 "traddr": "10.0.0.2", 00:23:42.710 "adrfam": "ipv4", 00:23:42.710 "trsvcid": "8009", 00:23:42.710 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:42.710 "wait_for_attach": true, 00:23:42.710 "method": "bdev_nvme_start_discovery", 00:23:42.710 "req_id": 1 00:23:42.710 } 00:23:42.710 Got JSON-RPC error response 00:23:42.710 response: 00:23:42.710 { 00:23:42.710 "code": -17, 00:23:42.710 "message": "File exists" 00:23:42.710 } 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:42.710 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.969 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.970 request: 00:23:42.970 { 00:23:42.970 "name": "nvme_second", 00:23:42.970 "trtype": "tcp", 00:23:42.970 "traddr": "10.0.0.2", 00:23:42.970 "adrfam": "ipv4", 00:23:42.970 "trsvcid": "8009", 00:23:42.970 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:42.970 "wait_for_attach": true, 00:23:42.970 "method": 
"bdev_nvme_start_discovery", 00:23:42.970 "req_id": 1 00:23:42.970 } 00:23:42.970 Got JSON-RPC error response 00:23:42.970 response: 00:23:42.970 { 00:23:42.970 "code": -17, 00:23:42.970 "message": "File exists" 00:23:42.970 } 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:42.970 14:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.970 14:33:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.970 14:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.907 [2024-11-17 14:33:33.057795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.907 [2024-11-17 14:33:33.057823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11bc2a0 with addr=10.0.0.2, port=8010 00:23:43.907 [2024-11-17 14:33:33.057839] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:43.907 [2024-11-17 14:33:33.057846] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:43.907 [2024-11-17 14:33:33.057853] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:44.845 [2024-11-17 14:33:34.060188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.846 [2024-11-17 14:33:34.060213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11bc2a0 with addr=10.0.0.2, port=8010 00:23:44.846 [2024-11-17 14:33:34.060225] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:44.846 [2024-11-17 14:33:34.060232] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:44.846 [2024-11-17 14:33:34.060239] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:46.224 [2024-11-17 14:33:35.062404] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:46.224 request: 00:23:46.224 { 00:23:46.224 "name": "nvme_second", 00:23:46.224 "trtype": "tcp", 00:23:46.224 "traddr": "10.0.0.2", 00:23:46.224 "adrfam": "ipv4", 00:23:46.224 "trsvcid": "8010", 00:23:46.224 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:46.224 "wait_for_attach": false, 00:23:46.224 "attach_timeout_ms": 3000, 00:23:46.224 "method": "bdev_nvme_start_discovery", 00:23:46.224 "req_id": 1 00:23:46.224 } 00:23:46.224 Got JSON-RPC error response 00:23:46.224 response: 00:23:46.224 { 00:23:46.224 "code": -110, 00:23:46.224 "message": "Connection timed out" 00:23:46.224 } 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:46.224 14:33:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1567451 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:46.224 rmmod nvme_tcp 00:23:46.224 rmmod nvme_fabrics 00:23:46.224 rmmod nvme_keyring 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:46.224 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1567423 ']' 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1567423 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1567423 ']' 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1567423 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1567423 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1567423' 00:23:46.225 killing process with pid 1567423 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1567423 
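The killprocess trace above shows how the harness guards its kill: verify the pid is alive (kill -0), resolve its command name with ps, refuse to touch a sudo wrapper, then kill and wait. A hedged re-sketch of that flow (simplified; the real helper lives in autotest_common.sh):

  killprocess_sketch() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0      # not running: nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")     # same ps call as in the log
      [ "$name" = sudo ] && return 1              # never kill the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid" 2>/dev/null      # wait only works for a child
  }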
00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1567423 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.225 14:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.765 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:48.765 00:23:48.765 real 0m17.101s 00:23:48.765 user 0m20.299s 00:23:48.765 sys 0m5.793s 00:23:48.765 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.765 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.765 ************************************ 00:23:48.766 END TEST nvmf_host_discovery 00:23:48.766 ************************************ 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.766 ************************************ 00:23:48.766 START TEST nvmf_host_multipath_status 00:23:48.766 ************************************ 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:48.766 * Looking for test storage... 
00:23:48.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:48.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.766 --rc genhtml_branch_coverage=1 00:23:48.766 --rc genhtml_function_coverage=1 00:23:48.766 --rc genhtml_legend=1 00:23:48.766 --rc geninfo_all_blocks=1 00:23:48.766 --rc geninfo_unexecuted_blocks=1 00:23:48.766 00:23:48.766 ' 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:48.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.766 --rc genhtml_branch_coverage=1 00:23:48.766 --rc genhtml_function_coverage=1 00:23:48.766 --rc genhtml_legend=1 00:23:48.766 --rc geninfo_all_blocks=1 00:23:48.766 --rc geninfo_unexecuted_blocks=1 00:23:48.766 00:23:48.766 ' 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:48.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.766 --rc genhtml_branch_coverage=1 00:23:48.766 --rc genhtml_function_coverage=1 00:23:48.766 --rc genhtml_legend=1 00:23:48.766 --rc geninfo_all_blocks=1 00:23:48.766 --rc geninfo_unexecuted_blocks=1 00:23:48.766 00:23:48.766 ' 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:48.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.766 --rc genhtml_branch_coverage=1 00:23:48.766 --rc genhtml_function_coverage=1 00:23:48.766 --rc genhtml_legend=1 00:23:48.766 --rc geninfo_all_blocks=1 00:23:48.766 --rc geninfo_unexecuted_blocks=1 00:23:48.766 00:23:48.766 ' 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
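The scripts/common.sh walk above (lt 1.15 2 via cmp_versions) decides whether the installed lcov is new enough, which in turn selects the legacy --rc lcov_* coverage flags exported just after it; it splits both versions on '.'/'-' and compares field by field. A hedged re-implementation of that comparison, sufficient to reproduce the decision in the log:

  version_lt() {                       # returns 0 when $1 < $2; sketch only
      local IFS=.-
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                         # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov 1.15 < 2: legacy --rc lcov_* flags selected'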
00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.766 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:48.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:48.767 14:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.340 14:33:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:55.340 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
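gather_supported_nvmf_pci_devs above classifies each NIC by vendor:device id before resolving its net devices: 0x8086:0x159b (the two 0000:86:00.x ports found here) is an Intel E810 'ice' part, so it lands in the e810 array rather than x722 or mlx. A condensed sketch of that dispatch, with the mellanox id list abbreviated from the common.sh table above:

  classify_nic() {                     # vendor id, device id -> NIC family
      case "$1:$2" in
          0x8086:0x1592|0x8086:0x159b) echo e810 ;;
          0x8086:0x37d2)               echo x722 ;;
          0x15b3:*)                    echo mlx ;;   # abbreviated: several ids
          *)                           echo unknown ;;
      esac
  }
  classify_nic 0x8086 0x159b           # -> e810, as echoed for 0000:86:00.0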
00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:55.340 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:55.340 Found net devices under 0000:86:00.0: cvl_0_0 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:23:55.340 Found net devices under 0000:86:00.1: cvl_0_1 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.340 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.341 14:33:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:23:55.341 00:23:55.341 --- 10.0.0.2 ping statistics --- 00:23:55.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.341 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:23:55.341 00:23:55.341 --- 10.0.0.1 ping statistics --- 00:23:55.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.341 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1572521 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1572521 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1572521 ']' 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.341 14:33:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.341 [2024-11-17 14:33:43.756475] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:23:55.341 [2024-11-17 14:33:43.756528] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.341 [2024-11-17 14:33:43.835630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:55.341 [2024-11-17 14:33:43.879747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.341 [2024-11-17 14:33:43.879778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.341 [2024-11-17 14:33:43.879786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.341 [2024-11-17 14:33:43.879792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.341 [2024-11-17 14:33:43.879797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.341 [2024-11-17 14:33:43.880849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.341 [2024-11-17 14:33:43.880851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.341 14:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.341 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.341 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1572521 00:23:55.341 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:55.341 [2024-11-17 14:33:44.184466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.341 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:55.341 Malloc0 00:23:55.341 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2
00:23:55.601 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:55.860 14:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:55.860 [2024-11-17 14:33:45.025592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:55.860 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:56.119 [2024-11-17 14:33:45.214066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:56.119 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:23:56.119 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1572780
00:23:56.119 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:56.119 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1572780 /var/tmp/bdevperf.sock
00:23:56.119 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1572780 ']'
00:23:56.119 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:56.119 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:56.119 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
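
The target-side bring-up recorded above reduces to a short RPC sequence. A condensed sketch, assembled only from the multipath_status.sh@36-@42 commands visible in the trace ($RPC and $NQN are shorthand introduced here; the commands and flags are verbatim from the log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_create_transport -t tcp -o -u 8192                          # sh@36
    $RPC bdev_malloc_create 64 512 -b Malloc0                             # sh@37
    $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2      # sh@39
    $RPC nvmf_subsystem_add_ns $NQN Malloc0                               # sh@40
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # sh@41
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421      # sh@42

The two listeners on ports 4420 and 4421 are the two paths the rest of the run flips between.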
00:23:56.120 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.120 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:56.379 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.379 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:56.379 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:56.638 14:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:57.206 Nvme0n1 00:23:57.206 14:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:57.465 Nvme0n1 00:23:57.465 14:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:57.465 14:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:59.373 14:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:59.373 14:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:59.632 14:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:59.892 14:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:00.838 14:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:00.838 14:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:00.838 14:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.838 14:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:01.096 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.096 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:01.096 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.096 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:01.356 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.356 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.356 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.356 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.614 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.615 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.615 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.615 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.615 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.615 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:01.615 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.615 14:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.874 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.874 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:01.874 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.874 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:02.133 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.133 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:02.133 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
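
On the host side, the sh@52-@56 frames above attach the same subsystem through both listeners under a single controller name, which is what makes Nvme0n1 a two-path multipath bdev. A condensed sketch lifted from the recorded commands ($RPC and $SOCK are shorthand; the flags are verbatim):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    $RPC -s $SOCK bdev_nvme_set_options -r -1                                      # sh@52
    $RPC -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10             # sh@55, first path
    $RPC -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10             # sh@56, second path

Both attach calls print Nvme0n1 in the trace: the second call adds a path to the existing bdev rather than creating a new one.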
00:24:02.391 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:02.650 14:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:03.587 14:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:03.587 14:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:03.587 14:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.587 14:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.846 14:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.846 14:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:03.846 14:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.846 14:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:04.105 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.105 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:04.105 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.105 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.105 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.105 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.105 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.105 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.364 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.364 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.364 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
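
Every status assertion in this run is the same sh@64 pattern: dump the I/O paths from bdevperf and pick one field for one trsvcid. A reconstruction of the port_status/check_status helpers from the frames visible here (only the rpc.py and jq invocations are verbatim; the shell around them is inferred from the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # Argument order matches the @68-@73 frames: current, then connected,
    # then accessible, with port 4420 before 4421 in each pair.
    check_status() {
        port_status 4420 current    $1 && port_status 4421 current    $2 &&
        port_status 4420 connected  $3 && port_status 4421 connected  $4 &&
        port_status 4420 accessible $5 && port_status 4421 accessible $6
    }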
00:24:04.364 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.623 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.623 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:04.623 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.623 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.882 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.883 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:04.883 14:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:05.142 14:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:05.142 14:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:06.519 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:06.519 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:06.519 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.519 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.519 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.519 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:06.519 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.519 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.779 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.779 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.779 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.779 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.779 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.779 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.779 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.779 14:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:07.038 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.038 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:07.039 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.039 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.298 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.298 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.298 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.298 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.557 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.557 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:07.557 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.815 14:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:08.074 14:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:09.010 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:09.010 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:09.010 14:33:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.010 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:09.269 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.269 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:09.269 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.269 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.528 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.528 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.528 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.528 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.528 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.528 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.528 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.528 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.787 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.787 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.787 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.787 14:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:10.052 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.052 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:10.052 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.052 14:33:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:10.318 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.318 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:10.318 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:10.318 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:10.577 14:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:11.954 14:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:11.954 14:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:11.954 14:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.954 14:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.954 14:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.954 14:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:11.954 14:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.954 14:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.954 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.954 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.954 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.954 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:12.213 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.213 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:12.213 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.213 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:12.477 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.477 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:12.477 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.477 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:12.736 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.736 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:12.736 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.736 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.736 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.736 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:12.736 14:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:12.995 14:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:13.254 14:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:14.191 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:14.191 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:14.191 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.191 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:14.450 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.450 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:14.450 14:34:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:14.450 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.709 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.709 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:14.709 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.709 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:14.968 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.968 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:14.968 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.968 14:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.968 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.968 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:14.968 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.968 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:15.227 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:15.227 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:15.227 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.227 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:15.486 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.487 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:15.746 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:15.746 14:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:16.005 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:16.264 14:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:17.202 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:17.202 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:17.202 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.202 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:17.462 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.462 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:17.462 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.462 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:17.462 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.462 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:17.722 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.722 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.722 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.722 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.722 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.722 14:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.981 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.981 14:34:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:17.981 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.981 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:18.240 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.240 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:18.240 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.240 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:18.500 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.500 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:18.500 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:18.759 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:18.759 14:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:20.139 14:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:20.139 14:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:20.139 14:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.139 14:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:20.139 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:20.139 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:20.139 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.139 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:20.398 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.398 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:20.398 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.398 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:20.398 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.398 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.398 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.398 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.656 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.656 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.656 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.656 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.915 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.915 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.915 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.915 14:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:21.175 14:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.175 14:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:21.175 14:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:21.434 14:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:21.434 14:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
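
The state changes themselves are the two-call helper visible in the @59/@60 frames, one nvmf_subsystem_listener_set_ana_state per listener. A sketch reconstructed from those frames (helper body inferred; the RPC commands are verbatim, $RPC is shorthand):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # set_ANA_state <state for 4420> <state for 4421>
    # states used in this run: optimized, non_optimized, inaccessible
    set_ANA_state() {
        $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"      # sh@59
        $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"      # sh@60
    }

Each combination is followed by a one-second sleep and a check_status call. Under the default active_passive policy at most one path reports current=true at a time; after the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call at sh@116 above, the optimized/optimized and non_optimized/non_optimized combinations report current=true on both ports, which is the difference between the earlier @92-style and the later @121/@131-style check_status lines.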
00:24:22.813 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:22.813 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:22.813 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.813 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.813 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.813 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:22.813 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.813 14:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:23.073 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.073 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:23.073 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.073 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:23.073 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.073 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:23.073 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:23.073 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.332 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.332 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:23.332 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.332 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:23.592 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.592 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:23.592 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.592 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:23.851 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.851 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:23.851 14:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:24.110 14:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:24.369 14:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:25.312 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:25.312 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:25.312 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.312 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:25.657 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.657 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:25.657 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.657 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:25.657 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.657 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:25.657 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.657 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:25.988 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:25.988 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:25.988 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.988 14:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:25.988 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.988 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:25.988 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.988 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:26.249 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.249 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:26.249 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.249 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1572780 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1572780 ']' 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1572780 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1572780 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1572780' 00:24:26.508 killing process with pid 1572780 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1572780 00:24:26.508 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1572780 00:24:26.508 { 00:24:26.508 "results": [ 00:24:26.508 { 00:24:26.508 "job": "Nvme0n1", 
00:24:26.508 "core_mask": "0x4", 00:24:26.508 "workload": "verify", 00:24:26.508 "status": "terminated", 00:24:26.508 "verify_range": { 00:24:26.508 "start": 0, 00:24:26.508 "length": 16384 00:24:26.508 }, 00:24:26.508 "queue_depth": 128, 00:24:26.508 "io_size": 4096, 00:24:26.508 "runtime": 28.989679, 00:24:26.508 "iops": 10469.898614606944, 00:24:26.508 "mibps": 40.898041463308374, 00:24:26.508 "io_failed": 0, 00:24:26.508 "io_timeout": 0, 00:24:26.508 "avg_latency_us": 12205.142188837974, 00:24:26.508 "min_latency_us": 1196.744347826087, 00:24:26.508 "max_latency_us": 3078254.4139130437 00:24:26.508 } 00:24:26.508 ], 00:24:26.508 "core_count": 1 00:24:26.508 } 00:24:26.791 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1572780 00:24:26.791 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:26.791 [2024-11-17 14:33:45.272853] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:24:26.791 [2024-11-17 14:33:45.272901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572780 ] 00:24:26.791 [2024-11-17 14:33:45.349262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.791 [2024-11-17 14:33:45.389922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.791 Running I/O for 90 seconds... 00:24:26.791 11255.00 IOPS, 43.96 MiB/s [2024-11-17T13:34:16.016Z] 11291.50 IOPS, 44.11 MiB/s [2024-11-17T13:34:16.016Z] 11351.00 IOPS, 44.34 MiB/s [2024-11-17T13:34:16.016Z] 11376.50 IOPS, 44.44 MiB/s [2024-11-17T13:34:16.016Z] 11407.40 IOPS, 44.56 MiB/s [2024-11-17T13:34:16.016Z] 11375.17 IOPS, 44.43 MiB/s [2024-11-17T13:34:16.016Z] 11379.57 IOPS, 44.45 MiB/s [2024-11-17T13:34:16.016Z] 11377.38 IOPS, 44.44 MiB/s [2024-11-17T13:34:16.016Z] 11361.44 IOPS, 44.38 MiB/s [2024-11-17T13:34:16.016Z] 11367.90 IOPS, 44.41 MiB/s [2024-11-17T13:34:16.016Z] 11351.91 IOPS, 44.34 MiB/s [2024-11-17T13:34:16.016Z] 11347.67 IOPS, 44.33 MiB/s [2024-11-17T13:34:16.016Z] [2024-11-17 14:33:59.510584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.791 [2024-11-17 14:33:59.510625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.791 [2024-11-17 14:33:59.510646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.791 [2024-11-17 14:33:59.510653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:26.791 [2024-11-17 14:33:59.510666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.791 [2024-11-17 14:33:59.510673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:26.791 [2024-11-17 14:33:59.510686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 
[2024-11-17 14:33:59.510584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:26.791 [2024-11-17 14:33:59.510868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.791 [2024-11-17 14:33:59.510875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:26.792 [2024-11-17 14:33:59.510888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:55 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.510895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.510907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.510913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.510926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.510934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.510947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.510956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511255] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 
p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.792 [2024-11-17 14:33:59.511748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:26.792 [2024-11-17 14:33:59.511760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511824] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.511988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.511995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:26.793 [2024-11-17 14:33:59.512014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.793 [2024-11-17 14:33:59.512033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.793 [2024-11-17 14:33:59.512052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.793 [2024-11-17 14:33:59.512071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.793 [2024-11-17 14:33:59.512091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.793 [2024-11-17 14:33:59.512111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.793 [2024-11-17 14:33:59.512129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512673] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 
14:33:59.512866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:26.793 [2024-11-17 14:33:59.512968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.793 [2024-11-17 14:33:59.512976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.512988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.794 [2024-11-17 14:33:59.512995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.794 [2024-11-17 14:33:59.513014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.794 [2024-11-17 14:33:59.513035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.794 [2024-11-17 14:33:59.513053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.794 [2024-11-17 14:33:59.513074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.794 [2024-11-17 14:33:59.513094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:66 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:26.794 [2024-11-17 14:33:59.513749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.794 [2024-11-17 14:33:59.513755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 
14:33:59.514305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:26.795 [2024-11-17 14:33:59.514489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.795 [2024-11-17 14:33:59.514496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:26.795 [2024-11-17 14:33:59.514509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.795 [2024-11-17 14:33:59.514516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command/spdk_nvme_print_completion pair repeats from 00:24:26.795 through 00:24:26.800 (14:33:59.514528 to 14:33:59.533082) for every outstanding I/O on qid:1: WRITE commands covering lba:117576-118272 and READ commands covering lba:117256-117568, len:8 each, with every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
00:24:26.800 [2024-11-17 14:33:59.533100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.800 [2024-11-17 14:33:59.533109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0
m:0 dnr:0 00:24:26.800 [2024-11-17 14:33:59.533126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.800 [2024-11-17 14:33:59.533135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.801 [2024-11-17 14:33:59.533162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.801 [2024-11-17 14:33:59.533188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 
14:33:59.533656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.533976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.533986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.534003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.534012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.534030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.534039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.534057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.534067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.534084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.801 [2024-11-17 14:33:59.534093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.534111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.801 [2024-11-17 14:33:59.534120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.534137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.801 [2024-11-17 14:33:59.539523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.539544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.801 [2024-11-17 14:33:59.539552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:26.801 [2024-11-17 14:33:59.539568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.539593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.539620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.539645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.539669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.539693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.539717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.539741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.539765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.539790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:24:26.802 [2024-11-17 14:33:59.539813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.539838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.539847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.540990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.540998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.541014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.541023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.541038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.541047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.541062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.541071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.541088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.541097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.541113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.541122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.541138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.541147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.541162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.541171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.541186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.541195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.541210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.541219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.541235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.802 [2024-11-17 14:33:59.541244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:26.802 [2024-11-17 14:33:59.541259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:26.803 [2024-11-17 14:33:59.541294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.541797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.803 [2024-11-17 14:33:59.541821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.803 [2024-11-17 14:33:59.541845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.803 [2024-11-17 14:33:59.541872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.803 [2024-11-17 14:33:59.541900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.541916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.803 [2024-11-17 14:33:59.541926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.542480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.803 [2024-11-17 14:33:59.542495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.542513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.803 [2024-11-17 14:33:59.542522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.542538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.542548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.542563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.542572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 
dnr:0 00:24:26.803 [2024-11-17 14:33:59.542588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.542597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.542613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.542622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.542638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.542648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.542665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.542674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.542691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.542699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.542715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.542724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.542743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.542752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.803 [2024-11-17 14:33:59.542768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.803 [2024-11-17 14:33:59.542776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.542792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.542801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.542816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.542826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.542842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.542851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.542867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.542875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.542891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.542900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.542915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.542924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.542940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.542948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.542965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.542974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.542989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.542998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.543023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.543048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.543073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.543098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.543123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.804 [2024-11-17 14:33:59.543147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.804 [2024-11-17 14:33:59.543171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.804 [2024-11-17 14:33:59.543195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.804 [2024-11-17 14:33:59.543219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.804 [2024-11-17 14:33:59.543244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.804 [2024-11-17 14:33:59.543268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.804 [2024-11-17 14:33:59.543292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:26.804 [2024-11-17 14:33:59.543307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:24:26.804 [2024-11-17 14:33:59.543317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:26.804 [2024-11-17 14:33:59.543332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.804 [2024-11-17 14:33:59.543346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:26.805 [2024-11-17 14:33:59.543979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.805 [2024-11-17 14:33:59.543988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:26.805 [... 2024-11-17 14:33:59.544xxx through 14:33:59.550xxx: roughly 230 further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided; every queued READ and WRITE on sqid:1 (nsid:1, lba 117256-118272, len:8) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), p:0 m:0 dnr:0 ...]
00:24:26.809 [2024-11-17 14:33:59.550323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.809 [2024-11-17 14:33:59.550333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.809 [2024-11-17 14:33:59.550350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.809 [2024-11-17 14:33:59.550364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:26.809 [2024-11-17 14:33:59.550381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.809 [2024-11-17 14:33:59.550390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:26.809 [2024-11-17 14:33:59.550407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.809 [2024-11-17 14:33:59.550416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.809 [2024-11-17 14:33:59.550434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.809 [2024-11-17 14:33:59.550444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.809 [2024-11-17 14:33:59.550460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.809 [2024-11-17 14:33:59.550469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 
14:33:59.550597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118056 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.550881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.810 [2024-11-17 14:33:59.550907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.810 [2024-11-17 14:33:59.550932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.550950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.810 [2024-11-17 14:33:59.550959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.810 [2024-11-17 14:33:59.551578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.810 [2024-11-17 14:33:59.551607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.810 [2024-11-17 14:33:59.551632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.810 [2024-11-17 14:33:59.551658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:24:26.810 [2024-11-17 14:33:59.551959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.551986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.551996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.552013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.552022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.552039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.810 [2024-11-17 14:33:59.552047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:26.810 [2024-11-17 14:33:59.552063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.811 [2024-11-17 14:33:59.552073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.811 [2024-11-17 14:33:59.552099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.811 [2024-11-17 14:33:59.552125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.811 [2024-11-17 14:33:59.552150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.811 [2024-11-17 14:33:59.552176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.811 [2024-11-17 14:33:59.552203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.811 [2024-11-17 14:33:59.552230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.811 [2024-11-17 14:33:59.552255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.811 [2024-11-17 14:33:59.552280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.811 [2024-11-17 14:33:59.552305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552464] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117432 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.552979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.552988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.553005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.553014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.553030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.553039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:26.811 [2024-11-17 14:33:59.553055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.811 [2024-11-17 14:33:59.553076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.812 [2024-11-17 14:33:59.553101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.812 [2024-11-17 14:33:59.553126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.812 [2024-11-17 14:33:59.553151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.812 [2024-11-17 14:33:59.553179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 
p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.553481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.553493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554484] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:26.812 [2024-11-17 14:33:59.554782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.812 [2024-11-17 14:33:59.554792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:26.813 [2024-11-17 14:33:59.554809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.813 [2024-11-17 14:33:59.554818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:26.813 [2024-11-17 14:33:59.554835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.813 [2024-11-17 14:33:59.554844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.813 [2024-11-17 14:33:59.554862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.813 [2024-11-17 14:33:59.554871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:26.813 [2024-11-17 14:33:59.554887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.813 [2024-11-17 14:33:59.554896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:26.813 [2024-11-17 14:33:59.554913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.813 [2024-11-17 14:33:59.554922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.813 [2024-11-17 14:33:59.554938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.813 [2024-11-17 14:33:59.554947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.813 [2024-11-17 14:33:59.554963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.813 [2024-11-17 14:33:59.554972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:26.813 [2024-11-17 14:33:59.554988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.813 [2024-11-17 14:33:59.554996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs: READ and WRITE commands on sqid:1 (nsid:1, lba 117256-118272, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, logged 00:24:26.813-00:24:26.818 (2024-11-17 14:33:59.554-59.561)]
lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.818 [2024-11-17 14:33:59.561439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:26.818 [2024-11-17 14:33:59.561452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.818 [2024-11-17 14:33:59.561459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:26.818 [2024-11-17 14:33:59.561472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.818 [2024-11-17 14:33:59.561479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:26.818 [2024-11-17 14:33:59.561492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.818 [2024-11-17 14:33:59.561500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:26.818 [2024-11-17 14:33:59.561513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.818 [2024-11-17 14:33:59.561520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:26.818 [2024-11-17 14:33:59.561533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.818 [2024-11-17 14:33:59.561540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:26.818 [2024-11-17 14:33:59.561553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.818 [2024-11-17 14:33:59.561561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:26.818 [2024-11-17 14:33:59.561573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.818 [2024-11-17 14:33:59.561580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:26.818 [2024-11-17 14:33:59.561593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.818 [2024-11-17 14:33:59.561602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.561614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.561622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.561635] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.561643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.561656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.561664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.561676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.561684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.561696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.561703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.561717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.561724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 
sqhd:005e p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 
14:33:59.562556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:26.819 [2024-11-17 14:33:59.562714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.819 [2024-11-17 14:33:59.562722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117888 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562958] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.562986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.562999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.566588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.566612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.566632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.566652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.566672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.566693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.566713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.566734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 
14:33:59.566747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.566757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.566777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.566797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.820 [2024-11-17 14:33:59.566816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.820 [2024-11-17 14:33:59.566836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.820 [2024-11-17 14:33:59.566859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.820 [2024-11-17 14:33:59.566880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.820 [2024-11-17 14:33:59.566902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.820 [2024-11-17 14:33:59.566922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.566935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.820 [2024-11-17 14:33:59.566943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.567497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.567511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.567527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.567534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.567547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.567555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.567570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.567578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.567592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.567599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.567612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.567620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.567633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.820 [2024-11-17 14:33:59.567640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:26.820 [2024-11-17 14:33:59.567653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:26.821 [2024-11-17 14:33:59.567905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.567986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.567998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.821 [2024-11-17 14:33:59.568005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 
nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568304] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:26.821 [2024-11-17 14:33:59.568435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.821 [2024-11-17 14:33:59.568443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:26.822 [2024-11-17 14:33:59.568456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.822 [2024-11-17 14:33:59.568463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.822 [2024-11-17 14:33:59.568476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.822 [2024-11-17 14:33:59.568483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:26.822 [2024-11-17 14:33:59.568498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.822 [2024-11-17 14:33:59.568505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:26.822 [2024-11-17 14:33:59.568518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.822 [2024-11-17 14:33:59.568526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:26.822 [2024-11-17 14:33:59.568539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.822 [2024-11-17 14:33:59.568547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs in the same pattern: READ and WRITE commands on sqid:1, lba 117256-118272, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:24:26.827 [2024-11-17 14:33:59.574443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.827 [2024-11-17 14:33:59.574450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:26.827 11209.92 IOPS,
43.79 MiB/s [2024-11-17T13:34:16.052Z] 10409.21 IOPS, 40.66 MiB/s [2024-11-17T13:34:16.052Z] 9715.27 IOPS, 37.95 MiB/s [2024-11-17T13:34:16.052Z] 9165.75 IOPS, 35.80 MiB/s [2024-11-17T13:34:16.052Z] 9287.59 IOPS, 36.28 MiB/s [2024-11-17T13:34:16.052Z] 9386.17 IOPS, 36.66 MiB/s [2024-11-17T13:34:16.052Z] 9550.47 IOPS, 37.31 MiB/s [2024-11-17T13:34:16.052Z] 9733.20 IOPS, 38.02 MiB/s [2024-11-17T13:34:16.052Z] 9906.43 IOPS, 38.70 MiB/s [2024-11-17T13:34:16.052Z] 9973.59 IOPS, 38.96 MiB/s [2024-11-17T13:34:16.052Z] 10022.74 IOPS, 39.15 MiB/s [2024-11-17T13:34:16.052Z] 10067.12 IOPS, 39.32 MiB/s [2024-11-17T13:34:16.052Z] 10199.24 IOPS, 39.84 MiB/s [2024-11-17T13:34:16.052Z] 10328.15 IOPS, 40.34 MiB/s [2024-11-17T13:34:16.052Z] [2024-11-17 14:34:13.307932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.827 [2024-11-17 14:34:13.307974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:26.827 [2024-11-17 14:34:13.308007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.827 [2024-11-17 14:34:13.308016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:26.827 [2024-11-17 14:34:13.308030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.827 [2024-11-17 14:34:13.308042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:26.827 [2024-11-17 14:34:13.308055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.827 [2024-11-17 14:34:13.308063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:26.827 [2024-11-17 14:34:13.308076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.827 [2024-11-17 14:34:13.308083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:26.827 [2024-11-17 14:34:13.308096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.827 [2024-11-17 14:34:13.308103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:26.827 [2024-11-17 14:34:13.308116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.827 [2024-11-17 14:34:13.308123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:26.827 [2024-11-17 14:34:13.308135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.827 [2024-11-17 14:34:13.308142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:26.827 [2024-11-17 14:34:13.308156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.827 [2024-11-17 14:34:13.308163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:26.827 [2024-11-17 14:34:13.308176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.827 [2024-11-17 14:34:13.308184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:26.827 [2024-11-17 14:34:13.308197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.827 [2024-11-17 14:34:13.308204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.827 [2024-11-17 14:34:13.308218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:26.828 [2024-11-17 14:34:13.308914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.308985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.308992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.309005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.309012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.309025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.309032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.309045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.309052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:90 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310691] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.828 [2024-11-17 14:34:13.310737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:26.828 [2024-11-17 14:34:13.310750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.829 [2024-11-17 14:34:13.310757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:26.829 10411.22 IOPS, 40.67 MiB/s [2024-11-17T13:34:16.054Z] 10444.00 IOPS, 40.80 MiB/s [2024-11-17T13:34:16.054Z] Received shutdown signal, test time was about 28.990356 seconds 00:24:26.829 00:24:26.829 Latency(us) 00:24:26.829 [2024-11-17T13:34:16.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.829 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:26.829 Verification LBA range: start 0x0 length 0x4000 00:24:26.829 Nvme0n1 : 28.99 10469.90 40.90 0.00 0.00 12205.14 1196.74 3078254.41 00:24:26.829 [2024-11-17T13:34:16.054Z] =================================================================================================================== 00:24:26.829 [2024-11-17T13:34:16.054Z] Total : 10469.90 40.90 0.00 0.00 12205.14 1196.74 3078254.41 00:24:26.829 14:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.088 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.089 rmmod 
nvme_tcp 00:24:27.089 rmmod nvme_fabrics 00:24:27.089 rmmod nvme_keyring 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1572521 ']' 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1572521 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1572521 ']' 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1572521 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1572521 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1572521' 00:24:27.089 killing process with pid 1572521 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1572521 00:24:27.089 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1572521 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.348 14:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.256 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:29.256 00:24:29.256 real 0m40.864s 00:24:29.256 
user 1m50.779s 00:24:29.256 sys 0m11.708s 00:24:29.256 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:29.256 14:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:29.256 ************************************ 00:24:29.256 END TEST nvmf_host_multipath_status 00:24:29.256 ************************************ 00:24:29.256 14:34:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:29.256 14:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:29.256 14:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:29.256 14:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.521 ************************************ 00:24:29.521 START TEST nvmf_discovery_remove_ifc 00:24:29.521 ************************************ 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:29.521 * Looking for test storage... 00:24:29.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l 
> ver2_l ? ver1_l : ver2_l) )) 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.521 --rc genhtml_branch_coverage=1 00:24:29.521 --rc genhtml_function_coverage=1 00:24:29.521 --rc genhtml_legend=1 00:24:29.521 --rc geninfo_all_blocks=1 00:24:29.521 --rc geninfo_unexecuted_blocks=1 00:24:29.521 00:24:29.521 ' 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.521 --rc genhtml_branch_coverage=1 00:24:29.521 --rc genhtml_function_coverage=1 00:24:29.521 --rc genhtml_legend=1 00:24:29.521 --rc geninfo_all_blocks=1 00:24:29.521 --rc geninfo_unexecuted_blocks=1 00:24:29.521 00:24:29.521 ' 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.521 --rc genhtml_branch_coverage=1 00:24:29.521 --rc genhtml_function_coverage=1 00:24:29.521 --rc genhtml_legend=1 00:24:29.521 --rc geninfo_all_blocks=1 00:24:29.521 --rc geninfo_unexecuted_blocks=1 00:24:29.521 00:24:29.521 ' 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.521 --rc genhtml_branch_coverage=1 00:24:29.521 --rc genhtml_function_coverage=1 00:24:29.521 --rc genhtml_legend=1 00:24:29.521 --rc geninfo_all_blocks=1 00:24:29.521 --rc geninfo_unexecuted_blocks=1 00:24:29.521 00:24:29.521 ' 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
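For reference, the lt/cmp_versions walk traced above is scripts/common.sh deciding whether the installed lcov predates version 2: both version strings are split on '.', '-' and ':' and compared field by field, which is why the pre-2.0 --rc lcov_branch_coverage flags are selected here. A minimal standalone sketch of that comparison, assuming purely numeric fields (version_lt is a hypothetical name, not the SPDK helper):

    version_lt() {
        # split on the same separators the trace shows (IFS=.-:)
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        # compare field by field, padding the shorter version with zeros
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            ((a > b)) && return 1
            ((a < b)) && return 0
        done
        return 1   # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # true here, matching the trace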
00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:29.521 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:29.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:29.522 14:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:36.099 14:34:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:36.099 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.099 14:34:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:36.099 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:36.099 Found net devices under 0000:86:00.0: cvl_0_0 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:36.099 Found net devices under 0000:86:00.1: cvl_0_1 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.099 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:36.099 
14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:36.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:24:36.099 00:24:36.099 --- 10.0.0.2 ping statistics --- 00:24:36.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.099 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:36.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:24:36.100 00:24:36.100 --- 10.0.0.1 ping statistics --- 00:24:36.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.100 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1581538 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1581538 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1581538 ']' 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
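[Annotation] The nvmf_tcp_init sequence traced above moves one of the two E810 ports into a private network namespace so target and initiator talk over real wire. Condensed into plain shell, and using this run's names (cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are rig-specific), the setup is roughly:

  # target port goes into its own namespace; initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface (tagged so
  # nvmftestfini can strip the rule on teardown)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:...'
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator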
00:24:36.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.100 [2024-11-17 14:34:24.678570] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:24:36.100 [2024-11-17 14:34:24.678619] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.100 [2024-11-17 14:34:24.759983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.100 [2024-11-17 14:34:24.798917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.100 [2024-11-17 14:34:24.798951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.100 [2024-11-17 14:34:24.798958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.100 [2024-11-17 14:34:24.798964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.100 [2024-11-17 14:34:24.798970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.100 [2024-11-17 14:34:24.799532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.100 14:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.100 [2024-11-17 14:34:24.954586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.100 [2024-11-17 14:34:24.962775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:36.100 null0 00:24:36.100 [2024-11-17 14:34:24.994747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1581567 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1581567 /tmp/host.sock 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1581567 ']' 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:36.100 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.100 [2024-11-17 14:34:25.063557] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:24:36.100 [2024-11-17 14:34:25.063598] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581567 ] 00:24:36.100 [2024-11-17 14:34:25.136057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.100 [2024-11-17 14:34:25.179188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.100 14:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.480 [2024-11-17 14:34:26.320815] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:37.480 [2024-11-17 14:34:26.320833] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:37.480 [2024-11-17 14:34:26.320847] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:37.480 [2024-11-17 14:34:26.408125] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:37.480 [2024-11-17 14:34:26.635319] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:37.480 [2024-11-17 14:34:26.636092] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18189f0:1 started. 00:24:37.480 [2024-11-17 14:34:26.637425] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:37.480 [2024-11-17 14:34:26.637463] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:37.480 [2024-11-17 14:34:26.637480] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:37.480 [2024-11-17 14:34:26.637493] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:37.480 [2024-11-17 14:34:26.637510] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:37.480 [2024-11-17 14:34:26.640346] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18189f0 was disconnected and freed. delete nvme_qpair. 
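[Annotation] The attach above is driven entirely over the host app's RPC socket: the host nvmf_tgt was launched with --wait-for-rpc, so it sits paused until configured. Reduced to the three calls visible in the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py), the host-side bring-up is:

  # 1) set NVMe bdev options while the framework is still paused (-e 1 as traced)
  rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
  # 2) finish framework initialization
  rpc_cmd -s /tmp/host.sock framework_start_init
  # 3) attach through the discovery service on 8009 and block until the
  #    data-plane controller on 4420 is up; the short loss/reconnect timeouts
  #    are what make the interface removal later in the test fail fast
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach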
00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:37.480 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:37.739 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:37.739 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.739 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.739 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.739 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.739 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.739 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.739 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.739 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.739 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:37.739 14:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:38.675 14:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.675 14:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.675 14:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.675 14:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.675 14:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.675 14:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.675 14:34:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.675 14:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.675 14:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:38.675 14:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:40.059 14:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.059 14:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.059 14:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.059 14:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.059 14:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.059 14:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.059 14:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.059 14:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.059 14:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:40.059 14:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:40.993 14:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.993 14:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.993 14:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.993 14:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.994 14:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.994 14:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.994 14:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.994 14:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.994 14:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:40.994 14:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.931 14:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.931 14:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.931 14:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.931 14:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.931 14:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.931 14:34:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.931 14:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.931 14:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.931 14:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:41.931 14:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:42.867 14:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:42.867 14:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.867 14:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:42.867 14:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.867 14:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:42.867 14:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.867 14:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:42.867 14:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.867 [2024-11-17 14:34:32.079081] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:42.867 [2024-11-17 14:34:32.079128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.867 [2024-11-17 14:34:32.079141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.867 [2024-11-17 14:34:32.079151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.867 [2024-11-17 14:34:32.079160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.867 [2024-11-17 14:34:32.079169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.867 [2024-11-17 14:34:32.079175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.867 [2024-11-17 14:34:32.079183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.867 [2024-11-17 14:34:32.079189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.867 [2024-11-17 14:34:32.079198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.867 [2024-11-17 14:34:32.079206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.867 [2024-11-17 14:34:32.079213] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5220 is same with the state(6) to be set 00:24:42.867 14:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:42.867 14:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.127 [2024-11-17 14:34:32.089104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f5220 (9): Bad file descriptor 00:24:43.127 [2024-11-17 14:34:32.099139] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:43.127 [2024-11-17 14:34:32.099151] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:43.127 [2024-11-17 14:34:32.099156] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:43.127 [2024-11-17 14:34:32.099161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:43.127 [2024-11-17 14:34:32.099181] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:44.065 14:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.065 14:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.065 14:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.065 14:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.065 14:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.065 14:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.065 14:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.065 [2024-11-17 14:34:33.152431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:44.065 [2024-11-17 14:34:33.152514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f5220 with addr=10.0.0.2, port=4420 00:24:44.065 [2024-11-17 14:34:33.152548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f5220 is same with the state(6) to be set 00:24:44.065 [2024-11-17 14:34:33.152613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f5220 (9): Bad file descriptor 00:24:44.065 [2024-11-17 14:34:33.153576] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:24:44.065 [2024-11-17 14:34:33.153642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:44.065 [2024-11-17 14:34:33.153668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:44.065 [2024-11-17 14:34:33.153691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:44.065 [2024-11-17 14:34:33.153712] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
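[Annotation] The errno 110 (Connection timed out) noise here is the intended stimulus, not a fault: discovery_remove_ifc.sh pulls the target's address out from under the live controller and later gives it back, checking that the discovery service drops nvme0n1 and re-attaches a fresh nvme1n1. The two halves of that stimulus, exactly as run at @75/@76 above and @82/@83 below:

  # remove: provokes the reconnect/failover errors seen in this stretch
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # restore: lets a new discovery attach come up as nvme1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up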
00:24:44.065 [2024-11-17 14:34:33.153728] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:44.065 [2024-11-17 14:34:33.153741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:44.065 [2024-11-17 14:34:33.153762] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:44.065 [2024-11-17 14:34:33.153777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:44.065 14:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.065 14:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:44.065 14:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:45.002 [2024-11-17 14:34:34.156300] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:45.002 [2024-11-17 14:34:34.156322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:45.002 [2024-11-17 14:34:34.156335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:45.002 [2024-11-17 14:34:34.156342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:45.002 [2024-11-17 14:34:34.156350] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:45.002 [2024-11-17 14:34:34.156361] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:45.002 [2024-11-17 14:34:34.156366] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:45.002 [2024-11-17 14:34:34.156370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:45.002 [2024-11-17 14:34:34.156391] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:45.002 [2024-11-17 14:34:34.156413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.002 [2024-11-17 14:34:34.156422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.002 [2024-11-17 14:34:34.156432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.002 [2024-11-17 14:34:34.156439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.002 [2024-11-17 14:34:34.156447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.002 [2024-11-17 14:34:34.156454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.002 [2024-11-17 14:34:34.156461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.002 [2024-11-17 14:34:34.156471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.002 [2024-11-17 14:34:34.156478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.002 [2024-11-17 14:34:34.156487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.002 [2024-11-17 14:34:34.156494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:24:45.002 [2024-11-17 14:34:34.157061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e4900 (9): Bad file descriptor 00:24:45.002 [2024-11-17 14:34:34.158072] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:45.002 [2024-11-17 14:34:34.158084] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:45.002 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.002 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.002 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.002 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.002 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.002 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.002 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.002 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:45.261 14:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:46.199 14:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:46.199 14:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.199 14:34:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:46.199 14:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.199 14:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:46.199 14:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.199 14:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:46.199 14:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.199 14:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:46.199 14:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:47.137 [2024-11-17 14:34:36.206864] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:47.137 [2024-11-17 14:34:36.206881] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:47.137 [2024-11-17 14:34:36.206893] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:47.137 [2024-11-17 14:34:36.334292] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:47.397 [2024-11-17 14:34:36.395919] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:47.397 [2024-11-17 14:34:36.396530] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x17e9760:1 started. 00:24:47.397 [2024-11-17 14:34:36.397554] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:47.397 [2024-11-17 14:34:36.397584] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:47.397 [2024-11-17 14:34:36.397601] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:47.397 [2024-11-17 14:34:36.397614] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:47.397 [2024-11-17 14:34:36.397620] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:47.397 [2024-11-17 14:34:36.405400] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x17e9760 was disconnected and freed. delete nvme_qpair. 
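[Annotation] Every wait in this test gates on the bdev list the host app reports. The get_bdev_list/wait_for_bdev pair exercised throughout reduces to roughly the following; this is a sketch reconstructed from the xtrace output (the canonical definitions live in test/nvmf/host/discovery_remove_ifc.sh and may differ in detail):

  get_bdev_list() {
      # names of all bdevs known to the host app, sorted, space-joined
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # poll once per second until the list matches the expected value
      # ('' while torn down, nvme0n1/nvme1n1 while attached)
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }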
00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1581567 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1581567 ']' 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1581567 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1581567 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1581567' 00:24:47.397 killing process with pid 1581567 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1581567 00:24:47.397 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1581567 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.657 rmmod nvme_tcp 00:24:47.657 rmmod nvme_fabrics 00:24:47.657 rmmod nvme_keyring 00:24:47.657 14:34:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1581538 ']' 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1581538 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1581538 ']' 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1581538 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1581538 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1581538' 00:24:47.657 killing process with pid 1581538 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1581538 00:24:47.657 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1581538 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.916 14:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.821 14:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.821 00:24:49.821 real 0m20.532s 00:24:49.821 user 0m24.813s 00:24:49.821 sys 0m5.818s 00:24:49.821 14:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.821 14:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.821 ************************************ 00:24:49.821 END TEST nvmf_discovery_remove_ifc 00:24:49.821 ************************************ 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.081 ************************************ 00:24:50.081 START TEST nvmf_identify_kernel_target 00:24:50.081 ************************************ 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:50.081 * Looking for test storage... 00:24:50.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:50.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.081 --rc genhtml_branch_coverage=1 00:24:50.081 --rc genhtml_function_coverage=1 00:24:50.081 --rc genhtml_legend=1 00:24:50.081 --rc geninfo_all_blocks=1 00:24:50.081 --rc geninfo_unexecuted_blocks=1 00:24:50.081 00:24:50.081 ' 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:50.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.081 --rc genhtml_branch_coverage=1 00:24:50.081 --rc genhtml_function_coverage=1 00:24:50.081 --rc genhtml_legend=1 00:24:50.081 --rc geninfo_all_blocks=1 00:24:50.081 --rc geninfo_unexecuted_blocks=1 00:24:50.081 00:24:50.081 ' 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:50.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.081 --rc genhtml_branch_coverage=1 00:24:50.081 --rc genhtml_function_coverage=1 00:24:50.081 --rc genhtml_legend=1 00:24:50.081 --rc geninfo_all_blocks=1 00:24:50.081 --rc geninfo_unexecuted_blocks=1 00:24:50.081 00:24:50.081 ' 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:50.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.081 --rc genhtml_branch_coverage=1 00:24:50.081 --rc genhtml_function_coverage=1 00:24:50.081 --rc genhtml_legend=1 00:24:50.081 --rc geninfo_all_blocks=1 00:24:50.081 --rc geninfo_unexecuted_blocks=1 00:24:50.081 00:24:50.081 ' 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.081 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:50.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.082 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.342 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:50.342 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:50.342 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:50.342 14:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.917 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.917 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.917 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.917 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.917 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.917 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.917 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.917 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.917 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.917 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:56.917 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.918 14:34:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:56.918 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:56.918 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:56.918 Found net devices under 0000:86:00.0: cvl_0_0 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:56.918 Found net devices under 0000:86:00.1: cvl_0_1 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.918 14:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:24:56.918 00:24:56.918 --- 10.0.0.2 ping statistics --- 00:24:56.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.918 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:24:56.918 00:24:56.918 --- 10.0.0.1 ping statistics --- 00:24:56.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.918 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.918 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.919 14:34:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:56.919 14:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:58.827 Waiting for block devices as requested 00:24:58.827 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:59.087 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:59.087 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:59.087 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:59.346 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:59.346 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:59.346 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:59.346 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:59.606 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:59.606 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:59.606 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:59.865 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:59.865 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:59.865 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:00.123 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:00.123 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:00.123 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
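The trace above shows configure_kernel_target() staging a Linux kernel nvmet target: the two E810 ports have already been split between the root namespace (cvl_0_1, 10.0.0.1/24) and cvl_0_0_ns_spdk (cvl_0_0, 10.0.0.2/24), TCP port 4420 has been opened in iptables, and the nvmet module is loaded before setup.sh hands a local NVMe device back to the kernel nvme driver. Below is a condensed sketch of the configfs sequence that follows in this trace; xtrace does not record redirections, so the attribute files named here are the standard nvmet configfs entries the echoed values are presumed to land in, with the NQN, device, and address copied from the log:

  # Minimal kernel NVMe-oF/TCP target over configfs (sketch; assumes nvmet is loaded
  # and /dev/nvme0n1 is unpartitioned, as the "No valid GPT data, bailing" check verifies).
  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  mkdir "$nvmet/subsystems/$nqn"                 # create the subsystem
  mkdir "$nvmet/subsystems/$nqn/namespaces/1"    # one namespace slot
  mkdir "$nvmet/ports/1"                         # one listener port
  echo "SPDK-$nqn"  > "$nvmet/subsystems/$nqn/attr_model"          # presumed target of "echo SPDK-nqn..."
  echo 1            > "$nvmet/subsystems/$nqn/attr_allow_any_host"
  echo /dev/nvme0n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
  echo 1            > "$nvmet/subsystems/$nqn/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"      # expose subsystem on the port

Once the symlink lands, the target answers discovery, which is exactly what the nvme discover call below against 10.0.0.1:4420 confirms with its two-record discovery log (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn).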
00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:00.381 No valid GPT data, bailing 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:00.381 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:00.382 00:25:00.382 Discovery Log Number of Records 2, Generation counter 2 00:25:00.382 =====Discovery Log Entry 0====== 00:25:00.382 trtype: tcp 00:25:00.382 adrfam: ipv4 00:25:00.382 subtype: current discovery subsystem 00:25:00.382 treq: not specified, sq flow control disable supported 00:25:00.382 portid: 1 00:25:00.382 trsvcid: 4420 00:25:00.382 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:00.382 traddr: 10.0.0.1 00:25:00.382 eflags: none 00:25:00.382 sectype: none 00:25:00.382 =====Discovery Log Entry 1====== 00:25:00.382 trtype: tcp 00:25:00.382 adrfam: ipv4 00:25:00.382 subtype: nvme subsystem 00:25:00.382 treq: not specified, sq flow control disable 
supported 00:25:00.382 portid: 1 00:25:00.382 trsvcid: 4420 00:25:00.382 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:00.382 traddr: 10.0.0.1 00:25:00.382 eflags: none 00:25:00.382 sectype: none 00:25:00.382 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:00.382 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:00.642 ===================================================== 00:25:00.642 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:00.642 ===================================================== 00:25:00.642 Controller Capabilities/Features 00:25:00.642 ================================ 00:25:00.642 Vendor ID: 0000 00:25:00.642 Subsystem Vendor ID: 0000 00:25:00.642 Serial Number: d37c7b079ed9592e7cc4 00:25:00.642 Model Number: Linux 00:25:00.642 Firmware Version: 6.8.9-20 00:25:00.642 Recommended Arb Burst: 0 00:25:00.642 IEEE OUI Identifier: 00 00 00 00:25:00.642 Multi-path I/O 00:25:00.642 May have multiple subsystem ports: No 00:25:00.642 May have multiple controllers: No 00:25:00.642 Associated with SR-IOV VF: No 00:25:00.642 Max Data Transfer Size: Unlimited 00:25:00.642 Max Number of Namespaces: 0 00:25:00.642 Max Number of I/O Queues: 1024 00:25:00.642 NVMe Specification Version (VS): 1.3 00:25:00.642 NVMe Specification Version (Identify): 1.3 00:25:00.642 Maximum Queue Entries: 1024 00:25:00.642 Contiguous Queues Required: No 00:25:00.642 Arbitration Mechanisms Supported 00:25:00.642 Weighted Round Robin: Not Supported 00:25:00.642 Vendor Specific: Not Supported 00:25:00.642 Reset Timeout: 7500 ms 00:25:00.642 Doorbell Stride: 4 bytes 00:25:00.642 NVM Subsystem Reset: Not Supported 00:25:00.642 Command Sets Supported 00:25:00.642 NVM Command Set: Supported 00:25:00.642 Boot Partition: Not Supported 00:25:00.642 Memory Page Size Minimum: 4096 bytes 00:25:00.642 Memory Page Size Maximum: 4096 bytes 00:25:00.642 Persistent Memory Region: Not Supported 00:25:00.642 Optional Asynchronous Events Supported 00:25:00.642 Namespace Attribute Notices: Not Supported 00:25:00.642 Firmware Activation Notices: Not Supported 00:25:00.642 ANA Change Notices: Not Supported 00:25:00.642 PLE Aggregate Log Change Notices: Not Supported 00:25:00.642 LBA Status Info Alert Notices: Not Supported 00:25:00.642 EGE Aggregate Log Change Notices: Not Supported 00:25:00.642 Normal NVM Subsystem Shutdown event: Not Supported 00:25:00.642 Zone Descriptor Change Notices: Not Supported 00:25:00.642 Discovery Log Change Notices: Supported 00:25:00.642 Controller Attributes 00:25:00.642 128-bit Host Identifier: Not Supported 00:25:00.642 Non-Operational Permissive Mode: Not Supported 00:25:00.642 NVM Sets: Not Supported 00:25:00.642 Read Recovery Levels: Not Supported 00:25:00.642 Endurance Groups: Not Supported 00:25:00.642 Predictable Latency Mode: Not Supported 00:25:00.642 Traffic Based Keep ALive: Not Supported 00:25:00.642 Namespace Granularity: Not Supported 00:25:00.642 SQ Associations: Not Supported 00:25:00.642 UUID List: Not Supported 00:25:00.642 Multi-Domain Subsystem: Not Supported 00:25:00.642 Fixed Capacity Management: Not Supported 00:25:00.642 Variable Capacity Management: Not Supported 00:25:00.642 Delete Endurance Group: Not Supported 00:25:00.642 Delete NVM Set: Not Supported 00:25:00.643 Extended LBA Formats Supported: Not Supported 00:25:00.643 Flexible Data Placement 
Supported: Not Supported 00:25:00.643 00:25:00.643 Controller Memory Buffer Support 00:25:00.643 ================================ 00:25:00.643 Supported: No 00:25:00.643 00:25:00.643 Persistent Memory Region Support 00:25:00.643 ================================ 00:25:00.643 Supported: No 00:25:00.643 00:25:00.643 Admin Command Set Attributes 00:25:00.643 ============================ 00:25:00.643 Security Send/Receive: Not Supported 00:25:00.643 Format NVM: Not Supported 00:25:00.643 Firmware Activate/Download: Not Supported 00:25:00.643 Namespace Management: Not Supported 00:25:00.643 Device Self-Test: Not Supported 00:25:00.643 Directives: Not Supported 00:25:00.643 NVMe-MI: Not Supported 00:25:00.643 Virtualization Management: Not Supported 00:25:00.643 Doorbell Buffer Config: Not Supported 00:25:00.643 Get LBA Status Capability: Not Supported 00:25:00.643 Command & Feature Lockdown Capability: Not Supported 00:25:00.643 Abort Command Limit: 1 00:25:00.643 Async Event Request Limit: 1 00:25:00.643 Number of Firmware Slots: N/A 00:25:00.643 Firmware Slot 1 Read-Only: N/A 00:25:00.643 Firmware Activation Without Reset: N/A 00:25:00.643 Multiple Update Detection Support: N/A 00:25:00.643 Firmware Update Granularity: No Information Provided 00:25:00.643 Per-Namespace SMART Log: No 00:25:00.643 Asymmetric Namespace Access Log Page: Not Supported 00:25:00.643 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:00.643 Command Effects Log Page: Not Supported 00:25:00.643 Get Log Page Extended Data: Supported 00:25:00.643 Telemetry Log Pages: Not Supported 00:25:00.643 Persistent Event Log Pages: Not Supported 00:25:00.643 Supported Log Pages Log Page: May Support 00:25:00.643 Commands Supported & Effects Log Page: Not Supported 00:25:00.643 Feature Identifiers & Effects Log Page:May Support 00:25:00.643 NVMe-MI Commands & Effects Log Page: May Support 00:25:00.643 Data Area 4 for Telemetry Log: Not Supported 00:25:00.643 Error Log Page Entries Supported: 1 00:25:00.643 Keep Alive: Not Supported 00:25:00.643 00:25:00.643 NVM Command Set Attributes 00:25:00.643 ========================== 00:25:00.643 Submission Queue Entry Size 00:25:00.643 Max: 1 00:25:00.643 Min: 1 00:25:00.643 Completion Queue Entry Size 00:25:00.643 Max: 1 00:25:00.643 Min: 1 00:25:00.643 Number of Namespaces: 0 00:25:00.643 Compare Command: Not Supported 00:25:00.643 Write Uncorrectable Command: Not Supported 00:25:00.643 Dataset Management Command: Not Supported 00:25:00.643 Write Zeroes Command: Not Supported 00:25:00.643 Set Features Save Field: Not Supported 00:25:00.643 Reservations: Not Supported 00:25:00.643 Timestamp: Not Supported 00:25:00.643 Copy: Not Supported 00:25:00.643 Volatile Write Cache: Not Present 00:25:00.643 Atomic Write Unit (Normal): 1 00:25:00.643 Atomic Write Unit (PFail): 1 00:25:00.643 Atomic Compare & Write Unit: 1 00:25:00.643 Fused Compare & Write: Not Supported 00:25:00.643 Scatter-Gather List 00:25:00.643 SGL Command Set: Supported 00:25:00.643 SGL Keyed: Not Supported 00:25:00.643 SGL Bit Bucket Descriptor: Not Supported 00:25:00.643 SGL Metadata Pointer: Not Supported 00:25:00.643 Oversized SGL: Not Supported 00:25:00.643 SGL Metadata Address: Not Supported 00:25:00.643 SGL Offset: Supported 00:25:00.643 Transport SGL Data Block: Not Supported 00:25:00.643 Replay Protected Memory Block: Not Supported 00:25:00.643 00:25:00.643 Firmware Slot Information 00:25:00.643 ========================= 00:25:00.643 Active slot: 0 00:25:00.643 00:25:00.643 00:25:00.643 Error Log 00:25:00.643 
========= 00:25:00.643 00:25:00.643 Active Namespaces 00:25:00.643 ================= 00:25:00.643 Discovery Log Page 00:25:00.643 ================== 00:25:00.643 Generation Counter: 2 00:25:00.643 Number of Records: 2 00:25:00.643 Record Format: 0 00:25:00.643 00:25:00.643 Discovery Log Entry 0 00:25:00.643 ---------------------- 00:25:00.643 Transport Type: 3 (TCP) 00:25:00.643 Address Family: 1 (IPv4) 00:25:00.643 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:00.643 Entry Flags: 00:25:00.643 Duplicate Returned Information: 0 00:25:00.643 Explicit Persistent Connection Support for Discovery: 0 00:25:00.643 Transport Requirements: 00:25:00.643 Secure Channel: Not Specified 00:25:00.643 Port ID: 1 (0x0001) 00:25:00.643 Controller ID: 65535 (0xffff) 00:25:00.643 Admin Max SQ Size: 32 00:25:00.643 Transport Service Identifier: 4420 00:25:00.643 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:00.643 Transport Address: 10.0.0.1 00:25:00.643 Discovery Log Entry 1 00:25:00.643 ---------------------- 00:25:00.643 Transport Type: 3 (TCP) 00:25:00.643 Address Family: 1 (IPv4) 00:25:00.643 Subsystem Type: 2 (NVM Subsystem) 00:25:00.643 Entry Flags: 00:25:00.643 Duplicate Returned Information: 0 00:25:00.643 Explicit Persistent Connection Support for Discovery: 0 00:25:00.643 Transport Requirements: 00:25:00.643 Secure Channel: Not Specified 00:25:00.643 Port ID: 1 (0x0001) 00:25:00.643 Controller ID: 65535 (0xffff) 00:25:00.643 Admin Max SQ Size: 32 00:25:00.643 Transport Service Identifier: 4420 00:25:00.643 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:00.643 Transport Address: 10.0.0.1 00:25:00.643 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:00.643 get_feature(0x01) failed 00:25:00.643 get_feature(0x02) failed 00:25:00.643 get_feature(0x04) failed 00:25:00.643 ===================================================== 00:25:00.643 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:00.643 ===================================================== 00:25:00.643 Controller Capabilities/Features 00:25:00.643 ================================ 00:25:00.643 Vendor ID: 0000 00:25:00.643 Subsystem Vendor ID: 0000 00:25:00.643 Serial Number: 9a15f6b7d180e3fd7078 00:25:00.643 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:00.643 Firmware Version: 6.8.9-20 00:25:00.643 Recommended Arb Burst: 6 00:25:00.643 IEEE OUI Identifier: 00 00 00 00:25:00.643 Multi-path I/O 00:25:00.643 May have multiple subsystem ports: Yes 00:25:00.643 May have multiple controllers: Yes 00:25:00.643 Associated with SR-IOV VF: No 00:25:00.643 Max Data Transfer Size: Unlimited 00:25:00.643 Max Number of Namespaces: 1024 00:25:00.643 Max Number of I/O Queues: 128 00:25:00.643 NVMe Specification Version (VS): 1.3 00:25:00.643 NVMe Specification Version (Identify): 1.3 00:25:00.643 Maximum Queue Entries: 1024 00:25:00.643 Contiguous Queues Required: No 00:25:00.643 Arbitration Mechanisms Supported 00:25:00.643 Weighted Round Robin: Not Supported 00:25:00.643 Vendor Specific: Not Supported 00:25:00.643 Reset Timeout: 7500 ms 00:25:00.643 Doorbell Stride: 4 bytes 00:25:00.643 NVM Subsystem Reset: Not Supported 00:25:00.643 Command Sets Supported 00:25:00.643 NVM Command Set: Supported 00:25:00.643 Boot Partition: Not Supported 00:25:00.643 
Memory Page Size Minimum: 4096 bytes 00:25:00.643 Memory Page Size Maximum: 4096 bytes 00:25:00.643 Persistent Memory Region: Not Supported 00:25:00.643 Optional Asynchronous Events Supported 00:25:00.643 Namespace Attribute Notices: Supported 00:25:00.643 Firmware Activation Notices: Not Supported 00:25:00.643 ANA Change Notices: Supported 00:25:00.643 PLE Aggregate Log Change Notices: Not Supported 00:25:00.643 LBA Status Info Alert Notices: Not Supported 00:25:00.643 EGE Aggregate Log Change Notices: Not Supported 00:25:00.643 Normal NVM Subsystem Shutdown event: Not Supported 00:25:00.643 Zone Descriptor Change Notices: Not Supported 00:25:00.643 Discovery Log Change Notices: Not Supported 00:25:00.643 Controller Attributes 00:25:00.643 128-bit Host Identifier: Supported 00:25:00.643 Non-Operational Permissive Mode: Not Supported 00:25:00.643 NVM Sets: Not Supported 00:25:00.643 Read Recovery Levels: Not Supported 00:25:00.643 Endurance Groups: Not Supported 00:25:00.643 Predictable Latency Mode: Not Supported 00:25:00.643 Traffic Based Keep ALive: Supported 00:25:00.643 Namespace Granularity: Not Supported 00:25:00.643 SQ Associations: Not Supported 00:25:00.643 UUID List: Not Supported 00:25:00.643 Multi-Domain Subsystem: Not Supported 00:25:00.643 Fixed Capacity Management: Not Supported 00:25:00.643 Variable Capacity Management: Not Supported 00:25:00.643 Delete Endurance Group: Not Supported 00:25:00.643 Delete NVM Set: Not Supported 00:25:00.643 Extended LBA Formats Supported: Not Supported 00:25:00.643 Flexible Data Placement Supported: Not Supported 00:25:00.643 00:25:00.643 Controller Memory Buffer Support 00:25:00.643 ================================ 00:25:00.644 Supported: No 00:25:00.644 00:25:00.644 Persistent Memory Region Support 00:25:00.644 ================================ 00:25:00.644 Supported: No 00:25:00.644 00:25:00.644 Admin Command Set Attributes 00:25:00.644 ============================ 00:25:00.644 Security Send/Receive: Not Supported 00:25:00.644 Format NVM: Not Supported 00:25:00.644 Firmware Activate/Download: Not Supported 00:25:00.644 Namespace Management: Not Supported 00:25:00.644 Device Self-Test: Not Supported 00:25:00.644 Directives: Not Supported 00:25:00.644 NVMe-MI: Not Supported 00:25:00.644 Virtualization Management: Not Supported 00:25:00.644 Doorbell Buffer Config: Not Supported 00:25:00.644 Get LBA Status Capability: Not Supported 00:25:00.644 Command & Feature Lockdown Capability: Not Supported 00:25:00.644 Abort Command Limit: 4 00:25:00.644 Async Event Request Limit: 4 00:25:00.644 Number of Firmware Slots: N/A 00:25:00.644 Firmware Slot 1 Read-Only: N/A 00:25:00.644 Firmware Activation Without Reset: N/A 00:25:00.644 Multiple Update Detection Support: N/A 00:25:00.644 Firmware Update Granularity: No Information Provided 00:25:00.644 Per-Namespace SMART Log: Yes 00:25:00.644 Asymmetric Namespace Access Log Page: Supported 00:25:00.644 ANA Transition Time : 10 sec 00:25:00.644 00:25:00.644 Asymmetric Namespace Access Capabilities 00:25:00.644 ANA Optimized State : Supported 00:25:00.644 ANA Non-Optimized State : Supported 00:25:00.644 ANA Inaccessible State : Supported 00:25:00.644 ANA Persistent Loss State : Supported 00:25:00.644 ANA Change State : Supported 00:25:00.644 ANAGRPID is not changed : No 00:25:00.644 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:00.644 00:25:00.644 ANA Group Identifier Maximum : 128 00:25:00.644 Number of ANA Group Identifiers : 128 00:25:00.644 Max Number of Allowed Namespaces : 1024 00:25:00.644 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:00.644 Command Effects Log Page: Supported 00:25:00.644 Get Log Page Extended Data: Supported 00:25:00.644 Telemetry Log Pages: Not Supported 00:25:00.644 Persistent Event Log Pages: Not Supported 00:25:00.644 Supported Log Pages Log Page: May Support 00:25:00.644 Commands Supported & Effects Log Page: Not Supported 00:25:00.644 Feature Identifiers & Effects Log Page:May Support 00:25:00.644 NVMe-MI Commands & Effects Log Page: May Support 00:25:00.644 Data Area 4 for Telemetry Log: Not Supported 00:25:00.644 Error Log Page Entries Supported: 128 00:25:00.644 Keep Alive: Supported 00:25:00.644 Keep Alive Granularity: 1000 ms 00:25:00.644 00:25:00.644 NVM Command Set Attributes 00:25:00.644 ========================== 00:25:00.644 Submission Queue Entry Size 00:25:00.644 Max: 64 00:25:00.644 Min: 64 00:25:00.644 Completion Queue Entry Size 00:25:00.644 Max: 16 00:25:00.644 Min: 16 00:25:00.644 Number of Namespaces: 1024 00:25:00.644 Compare Command: Not Supported 00:25:00.644 Write Uncorrectable Command: Not Supported 00:25:00.644 Dataset Management Command: Supported 00:25:00.644 Write Zeroes Command: Supported 00:25:00.644 Set Features Save Field: Not Supported 00:25:00.644 Reservations: Not Supported 00:25:00.644 Timestamp: Not Supported 00:25:00.644 Copy: Not Supported 00:25:00.644 Volatile Write Cache: Present 00:25:00.644 Atomic Write Unit (Normal): 1 00:25:00.644 Atomic Write Unit (PFail): 1 00:25:00.644 Atomic Compare & Write Unit: 1 00:25:00.644 Fused Compare & Write: Not Supported 00:25:00.644 Scatter-Gather List 00:25:00.644 SGL Command Set: Supported 00:25:00.644 SGL Keyed: Not Supported 00:25:00.644 SGL Bit Bucket Descriptor: Not Supported 00:25:00.644 SGL Metadata Pointer: Not Supported 00:25:00.644 Oversized SGL: Not Supported 00:25:00.644 SGL Metadata Address: Not Supported 00:25:00.644 SGL Offset: Supported 00:25:00.644 Transport SGL Data Block: Not Supported 00:25:00.644 Replay Protected Memory Block: Not Supported 00:25:00.644 00:25:00.644 Firmware Slot Information 00:25:00.644 ========================= 00:25:00.644 Active slot: 0 00:25:00.644 00:25:00.644 Asymmetric Namespace Access 00:25:00.644 =========================== 00:25:00.644 Change Count : 0 00:25:00.644 Number of ANA Group Descriptors : 1 00:25:00.644 ANA Group Descriptor : 0 00:25:00.644 ANA Group ID : 1 00:25:00.644 Number of NSID Values : 1 00:25:00.644 Change Count : 0 00:25:00.644 ANA State : 1 00:25:00.644 Namespace Identifier : 1 00:25:00.644 00:25:00.644 Commands Supported and Effects 00:25:00.644 ============================== 00:25:00.644 Admin Commands 00:25:00.644 -------------- 00:25:00.644 Get Log Page (02h): Supported 00:25:00.644 Identify (06h): Supported 00:25:00.644 Abort (08h): Supported 00:25:00.644 Set Features (09h): Supported 00:25:00.644 Get Features (0Ah): Supported 00:25:00.644 Asynchronous Event Request (0Ch): Supported 00:25:00.644 Keep Alive (18h): Supported 00:25:00.644 I/O Commands 00:25:00.644 ------------ 00:25:00.644 Flush (00h): Supported 00:25:00.644 Write (01h): Supported LBA-Change 00:25:00.644 Read (02h): Supported 00:25:00.644 Write Zeroes (08h): Supported LBA-Change 00:25:00.644 Dataset Management (09h): Supported 00:25:00.644 00:25:00.644 Error Log 00:25:00.644 ========= 00:25:00.644 Entry: 0 00:25:00.644 Error Count: 0x3 00:25:00.644 Submission Queue Id: 0x0 00:25:00.644 Command Id: 0x5 00:25:00.644 Phase Bit: 0 00:25:00.644 Status Code: 0x2 00:25:00.644 Status Code Type: 0x0 00:25:00.644 Do Not Retry: 1 00:25:00.644 
Error Location: 0x28 00:25:00.644 LBA: 0x0 00:25:00.644 Namespace: 0x0 00:25:00.644 Vendor Log Page: 0x0 00:25:00.644 ----------- 00:25:00.644 Entry: 1 00:25:00.644 Error Count: 0x2 00:25:00.644 Submission Queue Id: 0x0 00:25:00.644 Command Id: 0x5 00:25:00.644 Phase Bit: 0 00:25:00.644 Status Code: 0x2 00:25:00.644 Status Code Type: 0x0 00:25:00.644 Do Not Retry: 1 00:25:00.644 Error Location: 0x28 00:25:00.644 LBA: 0x0 00:25:00.644 Namespace: 0x0 00:25:00.644 Vendor Log Page: 0x0 00:25:00.644 ----------- 00:25:00.644 Entry: 2 00:25:00.644 Error Count: 0x1 00:25:00.644 Submission Queue Id: 0x0 00:25:00.644 Command Id: 0x4 00:25:00.644 Phase Bit: 0 00:25:00.644 Status Code: 0x2 00:25:00.644 Status Code Type: 0x0 00:25:00.644 Do Not Retry: 1 00:25:00.644 Error Location: 0x28 00:25:00.644 LBA: 0x0 00:25:00.644 Namespace: 0x0 00:25:00.644 Vendor Log Page: 0x0 00:25:00.644 00:25:00.644 Number of Queues 00:25:00.644 ================ 00:25:00.644 Number of I/O Submission Queues: 128 00:25:00.644 Number of I/O Completion Queues: 128 00:25:00.644 00:25:00.644 ZNS Specific Controller Data 00:25:00.644 ============================ 00:25:00.644 Zone Append Size Limit: 0 00:25:00.644 00:25:00.644 00:25:00.644 Active Namespaces 00:25:00.644 ================= 00:25:00.644 get_feature(0x05) failed 00:25:00.644 Namespace ID:1 00:25:00.644 Command Set Identifier: NVM (00h) 00:25:00.644 Deallocate: Supported 00:25:00.644 Deallocated/Unwritten Error: Not Supported 00:25:00.644 Deallocated Read Value: Unknown 00:25:00.644 Deallocate in Write Zeroes: Not Supported 00:25:00.644 Deallocated Guard Field: 0xFFFF 00:25:00.644 Flush: Supported 00:25:00.644 Reservation: Not Supported 00:25:00.644 Namespace Sharing Capabilities: Multiple Controllers 00:25:00.644 Size (in LBAs): 1953525168 (931GiB) 00:25:00.644 Capacity (in LBAs): 1953525168 (931GiB) 00:25:00.644 Utilization (in LBAs): 1953525168 (931GiB) 00:25:00.644 UUID: 8d4c90eb-bc76-4667-abd3-e8b3fc7e2252 00:25:00.644 Thin Provisioning: Not Supported 00:25:00.644 Per-NS Atomic Units: Yes 00:25:00.644 Atomic Boundary Size (Normal): 0 00:25:00.644 Atomic Boundary Size (PFail): 0 00:25:00.644 Atomic Boundary Offset: 0 00:25:00.644 NGUID/EUI64 Never Reused: No 00:25:00.644 ANA group ID: 1 00:25:00.644 Namespace Write Protected: No 00:25:00.644 Number of LBA Formats: 1 00:25:00.644 Current LBA Format: LBA Format #00 00:25:00.644 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:00.644 00:25:00.644 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:00.644 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.644 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:00.644 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.644 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.645 rmmod nvme_tcp 00:25:00.645 rmmod nvme_fabrics 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:00.645 14:34:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.645 14:34:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.182 14:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.182 14:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:03.182 14:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:03.182 14:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:03.183 14:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:03.183 14:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:03.183 14:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:03.183 14:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:03.183 14:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:03.183 14:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:03.183 14:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:05.720 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:05.720 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:06.660 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:06.660 00:25:06.660 real 0m16.686s 00:25:06.660 user 0m4.368s 00:25:06.660 sys 0m8.708s 00:25:06.660 14:34:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.660 14:34:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.660 ************************************ 00:25:06.660 END TEST nvmf_identify_kernel_target 00:25:06.660 ************************************ 00:25:06.660 14:34:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:06.660 14:34:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:06.660 14:34:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.660 14:34:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.660 ************************************ 00:25:06.660 START TEST nvmf_auth_host 00:25:06.660 ************************************ 00:25:06.660 14:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:06.920 * Looking for test storage... 
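The exit trap traced just before the nvmf_auth_host banner above unwinds all of this in reverse: nvmftestfini unloads nvme-tcp/nvme-fabrics, strips the SPDK rules via the iptables-save | grep -v SPDK_NVMF | iptables-restore round trip, flushes the test interfaces and removes the namespace, and clean_kernel_target dismantles the configfs tree. A sketch of that teardown with the same paths (again, the `echo 0` redirection is hidden by xtrace and is presumed to hit the namespace's enable attribute):

  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # quiesce the namespace first
  rm -f  "$nvmet/ports/1/subsystems/$nqn"                 # unbind subsystem from the port
  rmdir  "$nvmet/subsystems/$nqn/namespaces/1"            # then remove namespace,
  rmdir  "$nvmet/ports/1"                                 # port,
  rmdir  "$nvmet/subsystems/$nqn"                         # and subsystem
  modprobe -r nvmet_tcp nvmet                             # unload once configfs is empty

setup.sh then rebinds the ioatdma and NVMe devices to vfio-pci, as the lines above show, leaving the node in its DPDK-ready state for the next test.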
00:25:06.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.920 14:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:06.920 14:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:06.920 14:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:06.920 14:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:06.920 14:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.920 14:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.920 14:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.920 14:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.920 14:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.920 14:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.920 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.920 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.920 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.920 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.920 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.920 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:06.920 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:06.920 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.920 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:06.920 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:06.920 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:06.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.921 --rc genhtml_branch_coverage=1 00:25:06.921 --rc genhtml_function_coverage=1 00:25:06.921 --rc genhtml_legend=1 00:25:06.921 --rc geninfo_all_blocks=1 00:25:06.921 --rc geninfo_unexecuted_blocks=1 00:25:06.921 00:25:06.921 ' 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:06.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.921 --rc genhtml_branch_coverage=1 00:25:06.921 --rc genhtml_function_coverage=1 00:25:06.921 --rc genhtml_legend=1 00:25:06.921 --rc geninfo_all_blocks=1 00:25:06.921 --rc geninfo_unexecuted_blocks=1 00:25:06.921 00:25:06.921 ' 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:06.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.921 --rc genhtml_branch_coverage=1 00:25:06.921 --rc genhtml_function_coverage=1 00:25:06.921 --rc genhtml_legend=1 00:25:06.921 --rc geninfo_all_blocks=1 00:25:06.921 --rc geninfo_unexecuted_blocks=1 00:25:06.921 00:25:06.921 ' 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:06.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.921 --rc genhtml_branch_coverage=1 00:25:06.921 --rc genhtml_function_coverage=1 00:25:06.921 --rc genhtml_legend=1 00:25:06.921 --rc geninfo_all_blocks=1 00:25:06.921 --rc geninfo_unexecuted_blocks=1 00:25:06.921 00:25:06.921 ' 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.921 14:34:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.921 14:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:13.495 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.495 14:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:13.496 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:13.496 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.496 
14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:13.496 Found net devices under 0000:86:00.0: cvl_0_0 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:13.496 Found net devices under 0000:86:00.1: cvl_0_1 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.496 14:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:25:13.496 00:25:13.496 --- 10.0.0.2 ping statistics --- 00:25:13.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.496 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:13.496 00:25:13.496 --- 10.0.0.1 ping statistics --- 00:25:13.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.496 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1593370 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1593370 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1593370 ']' 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.496 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
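The two one-packet pings close out nvmf_tcp_init: one ice port (cvl_0_0) has been moved into the fresh cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace at 10.0.0.1 (the two ports are presumably cabled back to back), and ICMP in both directions proves the path before any NVMe/TCP traffic is attempted. A condensed replay of those steps, using the interface names and addresses from this trace (the comments are added here):

ip netns add cvl_0_0_ns_spdk                                  # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one physical port inside
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                                            # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # and back

In the trace the iptables rule goes in through the ipts wrapper, which tags it with an SPDK_NVMF comment (visible in the expanded call above); the nvmf_tgt application is then launched behind the same ip netns exec cvl_0_0_ns_spdk prefix, as the nvmfappstart record shows.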
00:25:13.497 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.497 14:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=063b64d90b288993f641a36ecdbfad97 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3rk 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 063b64d90b288993f641a36ecdbfad97 0 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 063b64d90b288993f641a36ecdbfad97 0 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=063b64d90b288993f641a36ecdbfad97 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3rk 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3rk 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3rk 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.497 14:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2a10b6bd5357fcf2d43c77dc8bf89db48bb3473207ce39175dc8bc66f0fdfe45 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.d0W 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2a10b6bd5357fcf2d43c77dc8bf89db48bb3473207ce39175dc8bc66f0fdfe45 3 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2a10b6bd5357fcf2d43c77dc8bf89db48bb3473207ce39175dc8bc66f0fdfe45 3 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2a10b6bd5357fcf2d43c77dc8bf89db48bb3473207ce39175dc8bc66f0fdfe45 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.d0W 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.d0W 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.d0W 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6b3b9c650cf91f2592f0065331144c8fb191f13ff917027f 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mJV 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6b3b9c650cf91f2592f0065331144c8fb191f13ff917027f 0 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6b3b9c650cf91f2592f0065331144c8fb191f13ff917027f 0 
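Each gen_dhchap_key call above follows one recipe: xxd -p pulls len/2 random bytes as a hex string, and format_dhchap_key wraps that string into the DHHC-1 secret representation via the inline python. Comparing the raw hex generated here (6b3b9c650c...) with the finished secret that surfaces later in this trace (DHHC-1:00:NmIzYjljNjUw...) shows that the ASCII hex string itself is base64-encoded with four extra checksum bytes appended, behind a two-digit digest id (00 null, 01 sha256, 02 sha384, 03 sha512). A reconstructed sketch of that packing, not SPDK's literal helper; reading the trailing bytes as a little-endian CRC32 is an assumption:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex chars, the "null 48" case above
python - "$key" 0 <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the ASCII hex string itself is the secret
digest = int(sys.argv[2])                     # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(key).to_bytes(4, "little")   # trailing bytes assumed to be a little-endian CRC32
print("DHHC-1:{:02d}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY

The /tmp/spdk.key-*.XXX files written here (mktemp, then chmod 0600) are the same ones that the keyring_file_add_key RPCs register with the target a few records further down.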
00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6b3b9c650cf91f2592f0065331144c8fb191f13ff917027f 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mJV 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mJV 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.mJV 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f5f3799f90c4e1f560cb7f557c534c86b5d49245d7b6971 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.MVV 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f5f3799f90c4e1f560cb7f557c534c86b5d49245d7b6971 2 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f5f3799f90c4e1f560cb7f557c534c86b5d49245d7b6971 2 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f5f3799f90c4e1f560cb7f557c534c86b5d49245d7b6971 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.MVV 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.MVV 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.MVV 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.497 14:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5a1426de8487c523c80502635878985e 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.GyA 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5a1426de8487c523c80502635878985e 1 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5a1426de8487c523c80502635878985e 1 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5a1426de8487c523c80502635878985e 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.GyA 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.GyA 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.GyA 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:13.497 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e087dd7ef0200d890e786226d8ddbf68 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BRT 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e087dd7ef0200d890e786226d8ddbf68 1 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e087dd7ef0200d890e786226d8ddbf68 1 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=e087dd7ef0200d890e786226d8ddbf68 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BRT 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BRT 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.BRT 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d1057df112e5c09cdf09f1c64e279e9e11571f1857d03323 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.UV3 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d1057df112e5c09cdf09f1c64e279e9e11571f1857d03323 2 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d1057df112e5c09cdf09f1c64e279e9e11571f1857d03323 2 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d1057df112e5c09cdf09f1c64e279e9e11571f1857d03323 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.UV3 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.UV3 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.UV3 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.498 14:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f28363ab3880dbf3c549dd69db067a4b 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.GPL 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f28363ab3880dbf3c549dd69db067a4b 0 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f28363ab3880dbf3c549dd69db067a4b 0 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f28363ab3880dbf3c549dd69db067a4b 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:13.498 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.GPL 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.GPL 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.GPL 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5c85ca3c98fae428ab9b658bc0eaab7e7b3d581f56b099a938cbeb5ce76c21c4 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kga 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5c85ca3c98fae428ab9b658bc0eaab7e7b3d581f56b099a938cbeb5ce76c21c4 3 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5c85ca3c98fae428ab9b658bc0eaab7e7b3d581f56b099a938cbeb5ce76c21c4 3 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5c85ca3c98fae428ab9b658bc0eaab7e7b3d581f56b099a938cbeb5ce76c21c4 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kga 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kga 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.kga 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1593370 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1593370 ']' 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.758 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.018 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.018 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:14.018 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.018 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3rk 00:25:14.018 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.018 14:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.d0W ]] 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.d0W 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.mJV 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.MVV ]] 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.MVV 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.018 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.GyA 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.BRT ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BRT 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.UV3 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.GPL ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.GPL 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.kga 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.019 14:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:25:14.019 14:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:16.557 Waiting for block devices as requested
00:25:16.557 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:25:16.816 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:16.816 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:17.075 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:17.075 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:17.075 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:17.075 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:17.335 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:17.335 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:17.335 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:17.335 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:17.594 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:17.594 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:17.594 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:17.594 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:17.853 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:17.853 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:25:18.421 No valid GPT data, bailing
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:25:18.421 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
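At this point block_in_use has just cleared /dev/nvme0n1 for the kernel target: spdk-gpt.py bails out ("No valid GPT data") and blkid finds no PTTYPE, so the function returns 1 (not in use) and the disk is claimed. A condensed, illustrative version of that check; the if/else wrapper and messages are added here, only the probe command comes from the trace:

# A disk counts as free when no partition-table signature is found;
# blkid prints nothing and exits non-zero when the PTTYPE lookup comes up empty.
if pt=$(blkid -s PTTYPE -o value /dev/nvme0n1) && [[ -n $pt ]]; then
    echo "/dev/nvme0n1 carries a $pt partition table, leaving it alone" >&2
else
    nvme=/dev/nvme0n1   # free to hand to the kernel NVMe target set up below
fi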
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:25:18.422 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
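Bash xtrace does not print redirections, so the bare echo records above are writes into nvmet configfs attribute files with the destinations hidden; only the mkdir and ln -s targets are visible. The values line up with the kernel's standard nvmet configfs layout. A plausible reconstruction of the same bring-up (attribute names come from that ABI, not from this trace, and the targets of the first two writes are assumptions):

cd /sys/kernel/config/nvmet
# The two unlabeled writes presumably set the model string and the
# allow_any_host toggle; the trace does not show which files they hit.
echo SPDK-nqn.2024-02.io.spdk:cnode0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_model
echo 1 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/

The nvme discover run directly below confirms the port is answering with both the discovery subsystem and nqn.2024-02.io.spdk:cnode0.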
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.682 nvme0n1 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.682 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.975 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
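The trace up to here is nvmf/common.sh and host/auth.sh provisioning the kernel NVMe-oF target entirely through configfs: a subsystem and TCP port are created and linked, nvme discover confirms both the discovery subsystem and cnode0 are exposed, and nvmet_auth_set_key writes the DH-HMAC-CHAP material into the host entry. xtrace prints the echo commands but not their redirection targets, so the attribute names in the following sketch are assumptions taken from the Linux nvmet configfs layout, not something the log itself confirms:

# Minimal sketch of the target-side provisioning seen in the trace above.
# Paths mirror the log; attribute names (addr_*, attr_*, dhchap_*) are
# assumed from the Linux nvmet configfs layout, since xtrace hides the
# redirection targets of the echo commands.
cd /sys/kernel/config/nvmet
mkdir -p subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 ports/1
echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/
# Per-host auth: restrict the subsystem to one host NQN, then set its keys.
mkdir hosts/nqn.2024-02.io.spdk:host0
echo 0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 \
    subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > hosts/nqn.2024-02.io.spdk:host0/dhchap_hash
echo ffdhe2048 > hosts/nqn.2024-02.io.spdk:host0/dhchap_dhgroup
echo 'DHHC-1:00:<host secret>:' > hosts/nqn.2024-02.io.spdk:host0/dhchap_key        # placeholder
echo 'DHHC-1:02:<ctrlr secret>:' > hosts/nqn.2024-02.io.spdk:host0/dhchap_ctrl_key  # placeholder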
00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.976 14:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.976 nvme0n1 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.976 14:35:08 
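Each connect_authenticate round above is the initiator half of that handshake: rpc_cmd first narrows SPDK's accepted DH-HMAC-CHAP digests and DH groups with bdev_nvme_set_options, get_main_ns_ip resolves the initiator-side address (10.0.0.1 here) from the transport type, then the controller is attached with one key pair, checked, and detached before the next combination. The odd-looking [[ nvme0 == \n\v\m\e\0 ]] is just xtrace escaping the right-hand side of a plain string comparison. A sketch of the same round against SPDK's stock scripts/rpc.py client, where key0/ckey0 are key names the test registered earlier, outside this excerpt:

# One authentication round via SPDK's JSON-RPC client (scripts/rpc.py);
# key0/ckey0 are key names set up earlier in the test script.
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# A controller showing up in the list proves the DH-HMAC-CHAP exchange passed.
[[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# Tear down so the next digest/dhgroup/key combination starts clean.
./scripts/rpc.py bdev_nvme_detach_controller nvme0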
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.976 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.303 nvme0n1 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.303 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.604 nvme0n1 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:19.604 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.605 nvme0n1 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.605 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.881 nvme0n1 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.881 14:35:08 
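With keyid 4 done, the sweep moves to the next DH group: the host/auth.sh@100-102 lines show three nested loops, so every digest/DH-group/key combination gets its own set-key/connect/detach cycle. The skeleton implied by the trace is sketched below; the digest and dhgroup lists are the ones printed at auth.sh@94 earlier, while the keys array itself is defined outside this excerpt:

# Loop skeleton implied by host/auth.sh@100-104 in the trace.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side
        done
    done
done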
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.881 14:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:19.881 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.882 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.158 nvme0n1 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.158 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.159 
14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.159 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.420 nvme0n1 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.420 14:35:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.420 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.678 nvme0n1 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.678 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.679 14:35:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.679 14:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.938 nvme0n1 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.938 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.939 14:35:10 
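The five keys the sweep cycles through differ in their second field: the DHHC-1:00:, :01:, :02: and :03: prefixes mark how the secret inside the base64 payload was transformed per the NVMe DH-HMAC-CHAP secret representation (00 = cleartext, 01/02/03 = SHA-256/384/512). Keyid 4 carries no controller key (ckey is empty, hence the [[ -z '' ]] check above), so that round apparently exercises one-way authentication only. Secrets in this format are typically generated with nvme-cli; the flag spelling below is an assumption from recent nvme-cli versions, not something this log shows:

# Hypothetical key generation with nvme-cli's gen-dhchap-key:
#   --hmac 0..3 selects the transform encoded in the DHHC-1:<xx>: field;
#   --key-length must match the digest size for the hashed variants.
nvme gen-dhchap-key --key-length=32 --hmac=0 -n nqn.2024-02.io.spdk:host0   # DHHC-1:00:...
nvme gen-dhchap-key --key-length=48 --hmac=2 -n nqn.2024-02.io.spdk:host0   # DHHC-1:02:...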
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.939 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.198 nvme0n1 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.198 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.199 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.459 nvme0n1 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:21.459 14:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.459 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.719 nvme0n1 00:25:21.719 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:21.719 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.719 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.719 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.719 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.719 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.719 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.719 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.719 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
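[editor's note] The trace above is one full connect_authenticate pass: the suite restricts the initiator to a single digest and DH group (bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096), resolves the initiator IP, attaches with the keys under test, confirms the controller came up, and detaches. The following is a minimal sketch of the same four RPC steps driven through SPDK's scripts/rpc.py directly; rpc_cmd in the trace is the suite's wrapper around it, and the sketch assumes key names key2/ckey2 were already registered in SPDK's keyring by an earlier part of this run.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration (digest sha256, group ffdhe4096).
# Assumes: SPDK target running, listener at 10.0.0.1:4420, and keyring entries
# named key2/ckey2 created earlier in the suite. Illustrative only.
set -euo pipefail
rpc=./scripts/rpc.py

# 1. Allow exactly one digest and one DH group for this iteration.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# 2. Attach; DH-HMAC-CHAP runs during the connect itself.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. The attach only yields a controller if authentication succeeded;
#    double-check by name, exactly as the jq test in the trace does.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

# 4. Tear down before the next (digest, dhgroup, keyid) combination.
$rpc bdev_nvme_detach_controller nvme0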
00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.979 14:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.238 nvme0n1 00:25:22.238 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.239 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.499 nvme0n1 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.499 14:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.499 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.500 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.500 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.500 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.500 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.500 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.500 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.500 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.500 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.500 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.500 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.760 nvme0n1 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.760 14:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.329 nvme0n1 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 
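[editor's note] Each nvmet_auth_set_key <digest> <dhgroup> <keyid> block above ends in a series of echo lines: the HMAC name, the DH group, and one or two DHHC-1 secrets. Those are writes into the kernel nvmet target's configfs entry for the host, which is what makes the target demand DH-HMAC-CHAP on the next connect. A minimal sketch of that provisioning step follows, assuming the standard Linux nvmet configfs layout; the attribute names come from the kernel's nvmet auth support, not from this log, and the key strings are placeholders.

#!/usr/bin/env bash
# Sketch: provision DH-HMAC-CHAP material for one host on a kernel nvmet target.
# Paths follow the stock nvmet configfs layout; key values are illustrative.
set -euo pipefail

hostnqn="nqn.2024-02.io.spdk:host0"
host_dir="/sys/kernel/config/nvmet/hosts/${hostnqn}"

mkdir -p "$host_dir"
echo 'hmac(sha256)' > "${host_dir}/dhchap_hash"     # digest for the challenge
echo ffdhe6144      > "${host_dir}/dhchap_dhgroup"  # FFDHE group under test
echo 'DHHC-1:00:placeholder-host-key:' > "${host_dir}/dhchap_key"
# A controller key is only written when bidirectional auth is exercised,
# mirroring the [[ -z $ckey ]] guard at host/auth.sh@51 in the trace.
ckey='DHHC-1:00:placeholder-ctrlr-key:'
if [[ -n "$ckey" ]]; then
    echo "$ckey" > "${host_dir}/dhchap_ctrl_key"
fi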
00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.329 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.588 nvme0n1 00:25:23.588 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.588 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.588 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.588 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.588 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.588 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.847 14:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.847 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.848 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.848 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.848 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.848 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.848 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.848 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.848 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.848 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.848 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.848 14:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.108 nvme0n1 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.108 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.678 nvme0n1 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.678 14:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.937 nvme0n1 00:25:24.937 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.937 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.937 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.937 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.937 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.937 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.197 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:25.766 nvme0n1 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.766 14:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.335 nvme0n1 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:26.335 
14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.335 14:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.903 nvme0n1 00:25:26.903 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.903 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.903 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.903 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.903 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.903 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.903 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.903 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.903 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.903 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.163 
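
On the target side, the @48-@51 echoes inside every nvmet_auth_set_key block land in the kernel nvmet configfs entry for the allowed host. A sketch of the equivalent manual setup, assuming the usual kernel-nvmet configfs layout (the paths are my assumption; the trace only shows the echoed values):

  # target-side DH-HMAC-CHAP setup for one digest/dhgroup/key combination
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # @48: HMAC digest
  echo 'ffdhe8192'    > "$host/dhchap_dhgroup"   # @49: FFDHE group
  echo "$key"         > "$host/dhchap_key"       # @50: DHHC-1 host secret
  echo "$ckey"        > "$host/dhchap_ctrl_key"  # @51: controller secret, skipped when empty
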
14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.163 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.732 nvme0n1 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.732 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.733 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.733 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.733 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.733 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.733 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.733 14:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.301 nvme0n1 00:25:28.301 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.301 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.301 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.301 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.301 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
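
The host/auth.sh@100-@103 lines just above mark the outer sweep rolling over: sha256 is finished and the whole key set restarts under sha384 with the smallest FFDHE group. The shape of the driver loop, with the array contents partly assumed (this excerpt only shows sha256/sha384 and ffdhe8192/ffdhe2048/ffdhe3072; the full lists are an inference):

  digests=(sha256 sha384 sha512)                               # sha512 assumed
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192) # middle groups assumed
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do                       # keyids 0-4 in this log
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done
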
DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.302 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.562 nvme0n1 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.562 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.822 nvme0n1 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:28.822 14:35:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.822 14:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.081 nvme0n1 00:25:29.081 14:35:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.081 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.081 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.081 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.081 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.081 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.081 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.081 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.081 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.082 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.341 nvme0n1 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
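
Each iteration closes with the same @64/@65 verification: list the attached controllers, require that the one and only name is nvme0, then detach it so the next digest/dhgroup/key combination starts from a clean host. As a compact sketch (rpc_cmd again standing in for scripts/rpc.py):

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == \n\v\m\e\0 ]]   # backslash-escaping each char forces a literal match, not a glob
  rpc_cmd bdev_nvme_detach_controller nvme0
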
host/auth.sh@44 -- # digest=sha384 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.341 nvme0n1 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.341 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.600 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.601 nvme0n1 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.601 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.860 
14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.860 14:35:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.860 14:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.860 nvme0n1 00:25:29.860 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.860 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.860 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.860 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.860 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.860 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:30.118 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.119 nvme0n1 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.119 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.376 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.377 nvme0n1 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.377 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.635 
14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.635 nvme0n1 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.635 
14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.635 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.893 14:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.152 nvme0n1 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:31.152 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.153 14:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.153 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 nvme0n1 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.412 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.413 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.672 nvme0n1 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.672 14:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.931 nvme0n1 00:25:31.931 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.931 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.931 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.931 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.931 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.931 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.931 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.931 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.931 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.931 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.190 14:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.190 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.450 nvme0n1 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.450 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.709 nvme0n1 00:25:32.709 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.709 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.709 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.709 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.709 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.709 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.709 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.709 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.709 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.709 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.967 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.967 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.967 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:32.967 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.967 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.967 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.967 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.968 14:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.226 nvme0n1 00:25:33.226 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.226 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.226 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.227 14:35:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.227 14:35:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.227 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.795 nvme0n1 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.795 14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.795 
14:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.053 nvme0n1 00:25:34.053 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.053 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.053 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.053 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.053 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.053 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.313 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.573 nvme0n1 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.573 14:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.573 14:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.140 nvme0n1 00:25:35.140 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.140 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.141 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.141 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.141 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:35.399 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.400 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.968 nvme0n1 00:25:35.968 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.968 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.968 14:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.968 
14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.968 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.536 nvme0n1 00:25:36.536 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.536 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.536 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.536 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.537 14:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.104 nvme0n1 00:25:37.104 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.104 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.104 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.104 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.104 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.104 14:35:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.104 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.104 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.104 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.104 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.363 14:35:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.363 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.931 nvme0n1 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:37.931 14:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.931 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.190 nvme0n1 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.190 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.191 nvme0n1 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.191 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:38.450 
14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.450 nvme0n1 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.450 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.710 
14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.710 nvme0n1 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.710 14:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.970 nvme0n1 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.970 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.229 nvme0n1 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.229 
14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.229 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.229 14:35:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.230 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.230 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.230 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.230 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.230 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.230 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.230 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.230 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.230 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.230 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.230 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.489 nvme0n1 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:39.489 14:35:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.489 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.748 nvme0n1 00:25:39.748 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.748 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.748 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.748 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.748 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.748 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.748 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.748 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.748 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.749 14:35:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.749 14:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.008 nvme0n1 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.008 
14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.008 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
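The xtrace above repeats one identical pattern per (digest, dhgroup, keyid) combination; only the dhgroup and the key index change between blocks. As a minimal sketch of a single iteration — here sha512/ffdhe3072 with keyid 3 — assuming that rpc_cmd is the suite's wrapper around scripts/rpc.py, that key3/ckey3 name keyring entries the test loaded earlier, and that nvmet_auth_set_key writes to the usual kernel nvmet configfs attributes (the trace only shows the echo side, so the paths below are an assumption):

    # Target side (nvmet_auth_set_key, host/auth.sh@42-51): program what the kernel
    # target expects from this host. Configfs paths assumed, not visible in the trace.
    host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"
    echo ffdhe3072 > "$host_cfg/dhchap_dhgroup"
    echo 'DHHC-1:02:ZDEwNTdkZjEx...' > "$host_cfg/dhchap_key"       # keys[3], elided here
    echo 'DHHC-1:00:ZjI4MzYzYWIz...' > "$host_cfg/dhchap_ctrl_key"  # ckeys[3], written only when non-empty

    # Host side (connect_authenticate, host/auth.sh@55-65): pin the initiator to the
    # same digest/dhgroup, attach with the matching keys, verify the controller, detach.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion visible in the trace is what drops the controller-key flag for keyid 4, whose ckey is the empty string — the [[ -z '' ]] checks in the surrounding blocks are that branch, so the keyid-4 attaches below run with --dhchap-key key4 only.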
00:25:40.267 nvme0n1 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.267 14:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.267 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.526 nvme0n1 00:25:40.526 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.526 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.526 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.526 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.526 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.526 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.526 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.526 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.526 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.526 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.785 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.786 14:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.786 14:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.786 14:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.045 nvme0n1 00:25:41.045 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.045 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.045 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.045 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.045 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.045 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.045 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.045 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.045 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.046 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.305 nvme0n1 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.305 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.564 nvme0n1 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.564 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.565 14:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.823 nvme0n1 00:25:41.823 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.823 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.823 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.823 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.823 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.823 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.082 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.082 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.082 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.082 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.082 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.082 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.083 14:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.083 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.342 nvme0n1 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.342 14:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.342 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.911 nvme0n1 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.911 14:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.170 nvme0n1 00:25:43.170 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.170 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.170 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.170 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.170 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.170 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.170 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.170 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.170 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.170 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.429 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.430 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.430 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.430 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.430 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.689 nvme0n1 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.689 14:35:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.689 14:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.257 nvme0n1 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYzYjY0ZDkwYjI4ODk5M2Y2NDFhMzZlY2RiZmFkOTdBwfKM: 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: ]] 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmExMGI2YmQ1MzU3ZmNmMmQ0M2M3N2RjOGJmODlkYjQ4YmIzNDczMjA3Y2UzOTE3NWRjOGJjNjZmMGZkZmU0Ndi93/M=: 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.257 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.825 nvme0n1 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.825 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.826 14:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.394 nvme0n1 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.394 14:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.394 14:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.394 14:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.330 nvme0n1 00:25:46.330 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.330 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.330 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.330 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.330 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.330 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.330 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.330 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDEwNTdkZjExMmU1YzA5Y2RmMDlmMWM2NGUyNzllOWUxMTU3MWYxODU3ZDAzMzIzCkPk1A==: 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: ]] 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI4MzYzYWIzODgwZGJmM2M1NDlkZDY5ZGIwNjdhNGLv1p41: 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.331 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.331 
14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.898 nvme0n1 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM4NWNhM2M5OGZhZTQyOGFiOWI2NThiYzBlYWFiN2U3YjNkNTgxZjU2YjA5OWE5MzhjYmViNWNlNzZjMjFjNC4MV3A=: 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.899 14:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.467 nvme0n1 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.467 request: 00:25:47.467 { 00:25:47.467 "name": "nvme0", 00:25:47.467 "trtype": "tcp", 00:25:47.467 "traddr": "10.0.0.1", 00:25:47.467 "adrfam": "ipv4", 00:25:47.467 "trsvcid": "4420", 00:25:47.467 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:47.467 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:47.467 "prchk_reftag": false, 00:25:47.467 "prchk_guard": false, 00:25:47.467 "hdgst": false, 00:25:47.467 "ddgst": false, 00:25:47.467 "allow_unrecognized_csi": false, 00:25:47.467 "method": "bdev_nvme_attach_controller", 00:25:47.467 "req_id": 1 00:25:47.467 } 00:25:47.467 Got JSON-RPC error response 00:25:47.467 response: 00:25:47.467 { 00:25:47.467 "code": -5, 00:25:47.467 "message": "Input/output error" 00:25:47.467 } 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.467 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.727 request: 00:25:47.727 { 00:25:47.727 "name": "nvme0", 00:25:47.727 "trtype": "tcp", 00:25:47.727 "traddr": "10.0.0.1", 00:25:47.727 "adrfam": "ipv4", 00:25:47.727 "trsvcid": "4420", 00:25:47.727 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:47.727 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:47.727 "prchk_reftag": false, 00:25:47.727 "prchk_guard": false, 00:25:47.727 "hdgst": false, 00:25:47.727 "ddgst": false, 00:25:47.727 "dhchap_key": "key2", 00:25:47.727 "allow_unrecognized_csi": false, 00:25:47.727 "method": "bdev_nvme_attach_controller", 00:25:47.727 "req_id": 1 00:25:47.727 } 00:25:47.727 Got JSON-RPC error response 00:25:47.727 response: 00:25:47.727 { 00:25:47.727 "code": -5, 00:25:47.727 "message": "Input/output error" 00:25:47.727 } 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.727 request: 00:25:47.727 { 00:25:47.727 "name": "nvme0", 00:25:47.727 "trtype": "tcp", 00:25:47.727 "traddr": "10.0.0.1", 00:25:47.727 "adrfam": "ipv4", 00:25:47.727 "trsvcid": "4420", 00:25:47.727 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:47.727 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:47.727 "prchk_reftag": false, 00:25:47.727 "prchk_guard": false, 00:25:47.727 "hdgst": false, 00:25:47.727 "ddgst": false, 00:25:47.727 "dhchap_key": "key1", 00:25:47.727 "dhchap_ctrlr_key": "ckey2", 00:25:47.727 "allow_unrecognized_csi": false, 00:25:47.727 "method": "bdev_nvme_attach_controller", 00:25:47.727 "req_id": 1 00:25:47.727 } 00:25:47.727 Got JSON-RPC error response 00:25:47.727 response: 00:25:47.727 { 00:25:47.727 "code": -5, 00:25:47.727 "message": "Input/output 
error" 00:25:47.727 } 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.727 14:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.986 nvme0n1 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.986 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.987 request: 00:25:47.987 { 00:25:47.987 "name": "nvme0", 00:25:47.987 "dhchap_key": "key1", 00:25:47.987 "dhchap_ctrlr_key": "ckey2", 00:25:47.987 "method": "bdev_nvme_set_keys", 00:25:47.987 "req_id": 1 00:25:47.987 } 00:25:47.987 Got JSON-RPC error response 00:25:47.987 response: 00:25:47.987 { 00:25:47.987 "code": -13, 00:25:47.987 "message": "Permission denied" 00:25:47.987 } 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.987 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.245 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.245 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:48.245 14:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:49.181 14:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.181 14:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:49.181 14:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.181 14:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.181 14:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.181 14:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:49.181 14:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:50.126 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.126 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmIzYjljNjUwY2Y5MWYyNTkyZjAwNjUzMzExNDRjOGZiMTkxZjEzZmY5MTcwMjdmdQq55w==: 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: ]] 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:M2Y1ZjM3OTlmOTBjNGUxZjU2MGNiN2Y1NTdjNTM0Yzg2YjVkNDkyNDVkN2I2OTcxN3qNFg==: 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.127 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.388 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:50.388 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.388 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.388 nvme0n1 00:25:50.388 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.388 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:50.388 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.388 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWExNDI2ZGU4NDg3YzUyM2M4MDUwMjYzNTg3ODk4NWWWfnbu: 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: ]] 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTA4N2RkN2VmMDIwMGQ4OTBlNzg2MjI2ZDhkZGJmNjjhCeqn: 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.389 request: 00:25:50.389 { 00:25:50.389 "name": "nvme0", 00:25:50.389 "dhchap_key": "key2", 00:25:50.389 "dhchap_ctrlr_key": "ckey1", 00:25:50.389 "method": "bdev_nvme_set_keys", 00:25:50.389 "req_id": 1 00:25:50.389 } 00:25:50.389 Got JSON-RPC error response 00:25:50.389 response: 00:25:50.389 { 00:25:50.389 "code": -13, 00:25:50.389 "message": "Permission denied" 00:25:50.389 } 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.389 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.647 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:50.647 14:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:51.586 14:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:51.586 rmmod nvme_tcp 00:25:51.586 rmmod nvme_fabrics 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1593370 ']' 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1593370 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1593370 ']' 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1593370 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1593370 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1593370' 00:25:51.586 killing process with pid 1593370 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1593370 00:25:51.586 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1593370 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:25:51.846 14:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.752 14:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:53.752 14:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:53.752 14:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:54.012 14:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:54.012 14:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:54.012 14:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:54.012 14:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:54.012 14:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:54.012 14:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:54.012 14:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:54.012 14:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:54.012 14:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:54.012 14:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:57.304 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:57.304 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:57.873 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:57.873 14:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3rk /tmp/spdk.key-null.mJV /tmp/spdk.key-sha256.GyA /tmp/spdk.key-sha384.UV3 /tmp/spdk.key-sha512.kga /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:57.873 14:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:00.583 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:00.583 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:26:00.583 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:00.583 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:00.840 00:26:00.840 real 0m54.032s 00:26:00.840 user 0m48.568s 00:26:00.840 sys 0m12.762s 00:26:00.840 14:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.840 14:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.840 ************************************ 00:26:00.840 END TEST nvmf_auth_host 00:26:00.840 ************************************ 00:26:00.840 14:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:00.840 14:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:00.840 14:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:00.840 14:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.840 14:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.840 ************************************ 00:26:00.840 START TEST nvmf_digest 00:26:00.840 ************************************ 00:26:00.840 14:35:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:00.840 * Looking for test storage... 
00:26:00.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:00.840 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:00.840 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:00.840 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.099 --rc genhtml_branch_coverage=1 00:26:01.099 --rc genhtml_function_coverage=1 00:26:01.099 --rc genhtml_legend=1 00:26:01.099 --rc geninfo_all_blocks=1 00:26:01.099 --rc geninfo_unexecuted_blocks=1 00:26:01.099 00:26:01.099 ' 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.099 --rc genhtml_branch_coverage=1 00:26:01.099 --rc genhtml_function_coverage=1 00:26:01.099 --rc genhtml_legend=1 00:26:01.099 --rc geninfo_all_blocks=1 00:26:01.099 --rc geninfo_unexecuted_blocks=1 00:26:01.099 00:26:01.099 ' 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.099 --rc genhtml_branch_coverage=1 00:26:01.099 --rc genhtml_function_coverage=1 00:26:01.099 --rc genhtml_legend=1 00:26:01.099 --rc geninfo_all_blocks=1 00:26:01.099 --rc geninfo_unexecuted_blocks=1 00:26:01.099 00:26:01.099 ' 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.099 --rc genhtml_branch_coverage=1 00:26:01.099 --rc genhtml_function_coverage=1 00:26:01.099 --rc genhtml_legend=1 00:26:01.099 --rc geninfo_all_blocks=1 00:26:01.099 --rc geninfo_unexecuted_blocks=1 00:26:01.099 00:26:01.099 ' 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.099 
14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.099 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:01.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:01.100 14:35:50 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:01.100 14:35:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.674 
14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:07.674 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:07.674 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:07.674 Found net devices under 0000:86:00.0: cvl_0_0 
00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.674 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:07.674 Found net devices under 0000:86:00.1: cvl_0_1 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.675 14:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:07.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:26:07.675 00:26:07.675 --- 10.0.0.2 ping statistics --- 00:26:07.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.675 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:07.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:26:07.675 00:26:07.675 --- 10.0.0.1 ping statistics --- 00:26:07.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.675 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:07.675 ************************************ 00:26:07.675 START TEST nvmf_digest_clean 00:26:07.675 ************************************ 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1607315 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1607315 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1607315 ']' 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.675 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:07.675 [2024-11-17 14:35:56.144059] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:26:07.675 [2024-11-17 14:35:56.144100] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.675 [2024-11-17 14:35:56.223847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.675 [2024-11-17 14:35:56.265689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.675 [2024-11-17 14:35:56.265722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.675 [2024-11-17 14:35:56.265729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.675 [2024-11-17 14:35:56.265735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.675 [2024-11-17 14:35:56.265740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:07.675 [2024-11-17 14:35:56.266303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.935 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.935 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:07.935 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:07.935 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.935 14:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:07.935 null0 00:26:07.935 [2024-11-17 14:35:57.092528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.935 [2024-11-17 14:35:57.116713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1607378 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1607378 /var/tmp/bperf.sock 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1607378 ']' 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:07.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.935 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:08.195 [2024-11-17 14:35:57.172085] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:26:08.195 [2024-11-17 14:35:57.172128] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607378 ] 00:26:08.195 [2024-11-17 14:35:57.247436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.195 [2024-11-17 14:35:57.290527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.195 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.195 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:08.195 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:08.195 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:08.195 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:08.453 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.453 14:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.021 nvme0n1 00:26:09.021 14:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:09.021 14:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:09.021 Running I/O for 2 seconds... 
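The randread pass above is driven entirely over the bperf RPC socket. Condensed from the trace (absolute workspace prefixes dropped; the socket path, target address, and NQN are exactly the values printed in the log), the control sequence is roughly:

  # start bdevperf idle: -z waits for a perform_tests RPC, --wait-for-rpc defers framework init
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # finish framework init, then attach the target with data digest (--ddgst) enabled
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the timed workload against the attached nvme0n1 bdev
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests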
00:26:11.334 25748.00 IOPS, 100.58 MiB/s [2024-11-17T13:36:00.559Z] 25388.50 IOPS, 99.17 MiB/s 00:26:11.334 Latency(us) 00:26:11.334 [2024-11-17T13:36:00.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.334 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:11.334 nvme0n1 : 2.00 25392.74 99.19 0.00 0.00 5035.10 2635.69 15158.76 00:26:11.334 [2024-11-17T13:36:00.559Z] =================================================================================================================== 00:26:11.334 [2024-11-17T13:36:00.559Z] Total : 25392.74 99.19 0.00 0.00 5035.10 2635.69 15158.76 00:26:11.334 { 00:26:11.334 "results": [ 00:26:11.334 { 00:26:11.334 "job": "nvme0n1", 00:26:11.334 "core_mask": "0x2", 00:26:11.334 "workload": "randread", 00:26:11.334 "status": "finished", 00:26:11.334 "queue_depth": 128, 00:26:11.334 "io_size": 4096, 00:26:11.334 "runtime": 2.003368, 00:26:11.334 "iops": 25392.738628150197, 00:26:11.334 "mibps": 99.1903852662117, 00:26:11.334 "io_failed": 0, 00:26:11.334 "io_timeout": 0, 00:26:11.334 "avg_latency_us": 5035.097811019004, 00:26:11.334 "min_latency_us": 2635.686956521739, 00:26:11.334 "max_latency_us": 15158.761739130436 00:26:11.334 } 00:26:11.334 ], 00:26:11.334 "core_count": 1 00:26:11.334 } 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:11.334 | select(.opcode=="crc32c") 00:26:11.334 | "\(.module_name) \(.executed)"' 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1607378 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1607378 ']' 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1607378 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1607378 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1607378' 00:26:11.334 killing process with pid 1607378 00:26:11.334 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1607378 00:26:11.334 Received shutdown signal, test time was about 2.000000 seconds 00:26:11.334 00:26:11.334 Latency(us) 00:26:11.334 [2024-11-17T13:36:00.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.334 [2024-11-17T13:36:00.559Z] =================================================================================================================== 00:26:11.334 [2024-11-17T13:36:00.560Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.335 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1607378 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1608081 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1608081 /var/tmp/bperf.sock 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1608081 ']' 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:11.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.594 [2024-11-17 14:36:00.633095] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:26:11.594 [2024-11-17 14:36:00.633144] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608081 ] 00:26:11.594 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:11.594 Zero copy mechanism will not be used. 00:26:11.594 [2024-11-17 14:36:00.709031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.594 [2024-11-17 14:36:00.751130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:11.594 14:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:11.853 14:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.853 14:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.420 nvme0n1 00:26:12.420 14:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:12.420 14:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:12.420 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:12.420 Zero copy mechanism will not be used. 00:26:12.420 Running I/O for 2 seconds... 
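Worth noting before the next results block: after every timed run the harness pulls the accel framework statistics over the same socket and checks that the crc32c digests were actually computed, and by the expected module (software here, since DSA scanning is false). The check, condensed from the trace:

  # read accel stats and extract "<module_name> <executed>" for the crc32c opcode
  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # the test passes when executed > 0 and module_name equals the expected module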
00:26:14.293 5531.00 IOPS, 691.38 MiB/s [2024-11-17T13:36:03.518Z] 5481.00 IOPS, 685.12 MiB/s 00:26:14.293 Latency(us) 00:26:14.293 [2024-11-17T13:36:03.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.293 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:14.293 nvme0n1 : 2.01 5474.52 684.31 0.00 0.00 2919.93 662.48 5841.25 00:26:14.293 [2024-11-17T13:36:03.518Z] =================================================================================================================== 00:26:14.293 [2024-11-17T13:36:03.518Z] Total : 5474.52 684.31 0.00 0.00 2919.93 662.48 5841.25 00:26:14.293 { 00:26:14.293 "results": [ 00:26:14.293 { 00:26:14.293 "job": "nvme0n1", 00:26:14.293 "core_mask": "0x2", 00:26:14.293 "workload": "randread", 00:26:14.293 "status": "finished", 00:26:14.293 "queue_depth": 16, 00:26:14.293 "io_size": 131072, 00:26:14.293 "runtime": 2.00529, 00:26:14.293 "iops": 5474.519894878047, 00:26:14.293 "mibps": 684.3149868597559, 00:26:14.293 "io_failed": 0, 00:26:14.293 "io_timeout": 0, 00:26:14.293 "avg_latency_us": 2919.9311292941616, 00:26:14.293 "min_latency_us": 662.4834782608696, 00:26:14.293 "max_latency_us": 5841.252173913043 00:26:14.293 } 00:26:14.293 ], 00:26:14.293 "core_count": 1 00:26:14.293 } 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:14.551 | select(.opcode=="crc32c") 00:26:14.551 | "\(.module_name) \(.executed)"' 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1608081 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1608081 ']' 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1608081 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1608081 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:14.551 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1608081' 00:26:14.551 killing process with pid 1608081 00:26:14.552 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1608081 00:26:14.552 Received shutdown signal, test time was about 2.000000 seconds 00:26:14.552 00:26:14.552 Latency(us) 00:26:14.552 [2024-11-17T13:36:03.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.552 [2024-11-17T13:36:03.777Z] =================================================================================================================== 00:26:14.552 [2024-11-17T13:36:03.777Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.552 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1608081 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1608636 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1608636 /var/tmp/bperf.sock 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1608636 ']' 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:14.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.811 14:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.811 [2024-11-17 14:36:03.971727] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:26:14.811 [2024-11-17 14:36:03.971776] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608636 ] 00:26:15.070 [2024-11-17 14:36:04.048826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.070 [2024-11-17 14:36:04.086537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.070 14:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.070 14:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:15.070 14:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:15.070 14:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:15.070 14:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:15.329 14:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.329 14:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.588 nvme0n1 00:26:15.588 14:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:15.588 14:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.847 Running I/O for 2 seconds... 
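A quick sanity check on the throughput columns in these tables: MiB/s is simply IOPS × I/O size. For the two randread passes reported above, 25392.74 IOPS × 4096 B ≈ 104.0 MB/s = 99.19 MiB/s, and 5474.52 IOPS × 131072 B ≈ 717.6 MB/s = 684.3 MiB/s, both matching the reported figures.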
00:26:17.719 27812.00 IOPS, 108.64 MiB/s [2024-11-17T13:36:06.944Z] 27870.00 IOPS, 108.87 MiB/s 00:26:17.720 Latency(us) 00:26:17.720 [2024-11-17T13:36:06.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.720 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:17.720 nvme0n1 : 2.01 27872.93 108.88 0.00 0.00 4586.04 2065.81 7351.43 00:26:17.720 [2024-11-17T13:36:06.945Z] =================================================================================================================== 00:26:17.720 [2024-11-17T13:36:06.945Z] Total : 27872.93 108.88 0.00 0.00 4586.04 2065.81 7351.43 00:26:17.720 { 00:26:17.720 "results": [ 00:26:17.720 { 00:26:17.720 "job": "nvme0n1", 00:26:17.720 "core_mask": "0x2", 00:26:17.720 "workload": "randwrite", 00:26:17.720 "status": "finished", 00:26:17.720 "queue_depth": 128, 00:26:17.720 "io_size": 4096, 00:26:17.720 "runtime": 2.006678, 00:26:17.720 "iops": 27872.932279120017, 00:26:17.720 "mibps": 108.87864171531257, 00:26:17.720 "io_failed": 0, 00:26:17.720 "io_timeout": 0, 00:26:17.720 "avg_latency_us": 4586.041227453212, 00:26:17.720 "min_latency_us": 2065.808695652174, 00:26:17.720 "max_latency_us": 7351.429565217391 00:26:17.720 } 00:26:17.720 ], 00:26:17.720 "core_count": 1 00:26:17.720 } 00:26:17.720 14:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:17.720 14:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:17.720 14:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:17.720 14:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:17.720 | select(.opcode=="crc32c") 00:26:17.720 | "\(.module_name) \(.executed)"' 00:26:17.720 14:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1608636 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1608636 ']' 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1608636 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1608636 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1608636' 00:26:17.979 killing process with pid 1608636 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1608636 00:26:17.979 Received shutdown signal, test time was about 2.000000 seconds 00:26:17.979 00:26:17.979 Latency(us) 00:26:17.979 [2024-11-17T13:36:07.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.979 [2024-11-17T13:36:07.204Z] =================================================================================================================== 00:26:17.979 [2024-11-17T13:36:07.204Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.979 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1608636 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1609640 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1609640 /var/tmp/bperf.sock 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1609640 ']' 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.239 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.239 [2024-11-17 14:36:07.399087] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:26:18.239 [2024-11-17 14:36:07.399138] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609640 ] 00:26:18.239 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.239 Zero copy mechanism will not be used. 00:26:18.498 [2024-11-17 14:36:07.476201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.498 [2024-11-17 14:36:07.520234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.498 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.498 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:18.498 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:18.498 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:18.498 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:18.758 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.758 14:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.017 nvme0n1 00:26:19.017 14:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:19.017 14:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.017 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:19.017 Zero copy mechanism will not be used. 00:26:19.017 Running I/O for 2 seconds... 
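The teardown stanzas that follow each run (and the two already shown) all come from the killprocess helper in autotest_common.sh. A simplified sketch reconstructed from the expanded trace lines, not the helper itself; the real function also handles sudo-wrapped processes and non-Linux hosts:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                    # the '[' -z "$pid" ']' guard
      kill -0 "$pid" || return 0                   # is the process still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      # a sudo-wrapped target would be killed via its child; not reproduced here
      echo "killing process with pid $pid"
      kill "$pid"
  }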
00:26:20.962 6237.00 IOPS, 779.62 MiB/s [2024-11-17T13:36:10.187Z] 6522.50 IOPS, 815.31 MiB/s 00:26:20.962 Latency(us) 00:26:20.962 [2024-11-17T13:36:10.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.962 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:20.962 nvme0n1 : 2.00 6521.37 815.17 0.00 0.00 2449.53 1823.61 8377.21 00:26:20.962 [2024-11-17T13:36:10.187Z] =================================================================================================================== 00:26:20.962 [2024-11-17T13:36:10.187Z] Total : 6521.37 815.17 0.00 0.00 2449.53 1823.61 8377.21 00:26:21.221 { 00:26:21.221 "results": [ 00:26:21.221 { 00:26:21.221 "job": "nvme0n1", 00:26:21.221 "core_mask": "0x2", 00:26:21.221 "workload": "randwrite", 00:26:21.221 "status": "finished", 00:26:21.221 "queue_depth": 16, 00:26:21.221 "io_size": 131072, 00:26:21.221 "runtime": 2.002799, 00:26:21.221 "iops": 6521.37333801345, 00:26:21.221 "mibps": 815.1716672516812, 00:26:21.221 "io_failed": 0, 00:26:21.221 "io_timeout": 0, 00:26:21.221 "avg_latency_us": 2449.5318877641034, 00:26:21.221 "min_latency_us": 1823.6104347826088, 00:26:21.221 "max_latency_us": 8377.210434782608 00:26:21.221 } 00:26:21.221 ], 00:26:21.221 "core_count": 1 00:26:21.221 } 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:21.221 | select(.opcode=="crc32c") 00:26:21.221 | "\(.module_name) \(.executed)"' 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1609640 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1609640 ']' 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1609640 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.221 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1609640 00:26:21.480 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1609640' 00:26:21.481 killing process with pid 1609640 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1609640 00:26:21.481 Received shutdown signal, test time was about 2.000000 seconds 00:26:21.481 00:26:21.481 Latency(us) 00:26:21.481 [2024-11-17T13:36:10.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.481 [2024-11-17T13:36:10.706Z] =================================================================================================================== 00:26:21.481 [2024-11-17T13:36:10.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1609640 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1607315 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1607315 ']' 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1607315 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1607315 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1607315' 00:26:21.481 killing process with pid 1607315 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1607315 00:26:21.481 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1607315 00:26:21.741 00:26:21.741 real 0m14.758s 00:26:21.741 user 0m27.796s 00:26:21.741 sys 0m4.650s 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.741 ************************************ 00:26:21.741 END TEST nvmf_digest_clean 00:26:21.741 ************************************ 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:21.741 ************************************ 00:26:21.741 START TEST nvmf_digest_error 00:26:21.741 ************************************ 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1610217 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1610217 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1610217 ']' 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.741 14:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:22.000 [2024-11-17 14:36:10.976502] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:26:22.000 [2024-11-17 14:36:10.976549] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.000 [2024-11-17 14:36:11.055039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.000 [2024-11-17 14:36:11.090999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.000 [2024-11-17 14:36:11.091032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.000 [2024-11-17 14:36:11.091039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.000 [2024-11-17 14:36:11.091045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.000 [2024-11-17 14:36:11.091050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:22.000 [2024-11-17 14:36:11.091620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:22.000 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:22.000 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.001 [2024-11-17 14:36:11.176090] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.001 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.260 null0
00:26:22.260 [2024-11-17 14:36:11.266441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:22.260 [2024-11-17 14:36:11.290619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1610240
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1610240 /var/tmp/bperf.sock
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1610240 ']'
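(A minimal sketch of the target-side setup just logged, for reference. The accel_assign_opc call, the null0 bdev, and the TCP transport/listener notices come straight from the log above; framework_start_init, bdev_null_create's sizes, and the nvmf_* config lines are assumptions based on stock rpc.py commands, since common_target_config hides them behind rpc_cmd:)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o crc32c -m error      # crc32c ops now route through the error-injection accel module
  $rpc framework_start_init                     # leave the --wait-for-rpc state (assumed; not echoed in the log)
  $rpc bdev_null_create null0 100 4096          # the "null0" bdev printed above (size/block size assumed)
  $rpc nvmf_create_transport -t tcp             # "*** TCP Transport Init ***"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

(Starting nvmf_tgt with --wait-for-rpc is what makes the first step possible: the crc32c opcode must be reassigned to the error module before subsystem initialization, and injection stays disabled until explicitly armed later.)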
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:22.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:22.260 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.260 [2024-11-17 14:36:11.344031] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
[2024-11-17 14:36:11.344072] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610240 ]
[2024-11-17 14:36:11.419783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-17 14:36:11.462257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:22.520 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:22.520 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:22.520 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:22.520 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:22.779 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:22.779 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.779 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.779 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.779 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:22.779 14:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:23.039 nvme0n1
00:26:23.039 14:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:23.039 14:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:23.039 14:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
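(The host side above, condensed into a sketch; every path, flag, and socket is taken from the log. bdevperf's -z makes it idle until an explicit perform_tests RPC, --ddgst enables the NVMe/TCP data digest whose corruption the run below exercises, and the final rpc_cmd arms the target's crc32c error module with -t corrupt -i 256, flags exactly as logged:)

  bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $bperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &    # -z: wait for the perform_tests RPC
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                         # --ddgst: data digest on; creates nvme0n1
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256                  # target side: arm crc32c corruption
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests                                   # starts "Running I/O for 2 seconds..."

(With --bdev-retry-count -1 the host keeps retrying; each corrupted data digest then surfaces in the run below as a data digest error in nvme_tcp.c:1365 and a COMMAND TRANSIENT TRANSPORT ERROR completion.)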
00:26:23.039 14:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.039 14:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:23.039 14:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:23.039 Running I/O for 2 seconds... 00:26:23.039 [2024-11-17 14:36:12.195625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.039 [2024-11-17 14:36:12.195658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.039 [2024-11-17 14:36:12.195672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.039 [2024-11-17 14:36:12.204856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.039 [2024-11-17 14:36:12.204879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.039 [2024-11-17 14:36:12.204887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.039 [2024-11-17 14:36:12.215659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.039 [2024-11-17 14:36:12.215681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.039 [2024-11-17 14:36:12.215690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.039 [2024-11-17 14:36:12.225336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.039 [2024-11-17 14:36:12.225363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.039 [2024-11-17 14:36:12.225373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.039 [2024-11-17 14:36:12.238899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.039 [2024-11-17 14:36:12.238920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.039 [2024-11-17 14:36:12.238929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.039 [2024-11-17 14:36:12.247232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.039 [2024-11-17 14:36:12.247252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.039 [2024-11-17 14:36:12.247260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.259687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.259709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.259718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.270600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.270620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.270629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.279774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.279795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.279803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.292308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.292332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.292341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.304844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.304865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.304874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.312995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.313015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.313023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.325438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.325459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.325467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.335567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.335588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.335596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.344474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.344494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.344503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.355882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.355903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.355911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.367852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.367873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.367881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.379210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.379231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.379239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.388200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.388221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.388229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.400050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.400070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.299 [2024-11-17 14:36:12.400078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.299 [2024-11-17 14:36:12.412012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.299 [2024-11-17 14:36:12.412033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.300 [2024-11-17 14:36:12.412041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.300 [2024-11-17 14:36:12.421446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.300 [2024-11-17 14:36:12.421469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.300 [2024-11-17 14:36:12.421477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.300 [2024-11-17 14:36:12.433263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.300 [2024-11-17 14:36:12.433284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.300 [2024-11-17 14:36:12.433292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.300 [2024-11-17 14:36:12.441431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.300 [2024-11-17 14:36:12.441452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.300 [2024-11-17 14:36:12.441460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.300 [2024-11-17 14:36:12.453265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.300 [2024-11-17 14:36:12.453285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.300 [2024-11-17 14:36:12.453293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.300 [2024-11-17 14:36:12.466012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.300 [2024-11-17 14:36:12.466033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.300 [2024-11-17 14:36:12.466042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.300 [2024-11-17 14:36:12.478867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.300 [2024-11-17 14:36:12.478888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:23.300 [2024-11-17 14:36:12.478899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.300 [2024-11-17 14:36:12.491520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.300 [2024-11-17 14:36:12.491541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.300 [2024-11-17 14:36:12.491550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.300 [2024-11-17 14:36:12.504191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.300 [2024-11-17 14:36:12.504212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.300 [2024-11-17 14:36:12.504220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.300 [2024-11-17 14:36:12.515459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.300 [2024-11-17 14:36:12.515480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.300 [2024-11-17 14:36:12.515488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.524549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.524576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-17 14:36:12.524584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.536246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.536266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-17 14:36:12.536275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.544972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.544992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-17 14:36:12.545000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.556248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.556270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2211 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-17 14:36:12.556279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.568039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.568060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-17 14:36:12.568069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.581529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.581555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-17 14:36:12.581564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.593864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.593885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-17 14:36:12.593893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.606105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.606125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-17 14:36:12.606134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.618686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.618706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-17 14:36:12.618714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.629975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.629995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-17 14:36:12.630003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.637489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.637510] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-17 14:36:12.637518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.560 [2024-11-17 14:36:12.648467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.560 [2024-11-17 14:36:12.648488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.648497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.661284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.661306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.661315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.672768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.672789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.672797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.681398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.681419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.681428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.691703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.691723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.691731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.699929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.699949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.699958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.711402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.711422] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.711431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.721001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.721021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.721029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.730509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.730530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.730538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.739897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.739918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.739926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.748210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.748231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.748239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.759863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.759884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.759896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.770300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.561 [2024-11-17 14:36:12.770319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.770328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.561 [2024-11-17 14:36:12.779635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 
00:26:23.561 [2024-11-17 14:36:12.779656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.561 [2024-11-17 14:36:12.779664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.788907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.788927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.788936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.799248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.799268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.799277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.810418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.810439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.810447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.819280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.819302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.819311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.831255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.831275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.831284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.844005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.844028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.844036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.852657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.852681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.852690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.864191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.864214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.864222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.876212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.876233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.876242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.888948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.888969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.888978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.900926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.900947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.900955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.912570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.912592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.912601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.921589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.921610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.921618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.933942] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.933963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.933971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.945674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.945694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.945702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.957180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.957200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.957208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.965980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.966000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.966009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.978967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.978988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.978997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.989023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.989043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.989051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:12.998161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:12.998183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:12.998192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:23.821 [2024-11-17 14:36:13.009304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:13.009325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:13.009333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:13.019613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:13.019634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:13.019642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.821 [2024-11-17 14:36:13.031266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:23.821 [2024-11-17 14:36:13.031287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.821 [2024-11-17 14:36:13.031296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.082 [2024-11-17 14:36:13.042594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:24.082 [2024-11-17 14:36:13.042630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.082 [2024-11-17 14:36:13.042639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.082 [2024-11-17 14:36:13.051868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:24.082 [2024-11-17 14:36:13.051889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.082 [2024-11-17 14:36:13.051897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.082 [2024-11-17 14:36:13.061550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:24.082 [2024-11-17 14:36:13.061571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.082 [2024-11-17 14:36:13.061580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.082 [2024-11-17 14:36:13.070583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:24.082 [2024-11-17 14:36:13.070611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.082 [2024-11-17 14:36:13.070619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.082 [2024-11-17 14:36:13.079396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:24.082 [2024-11-17 14:36:13.079417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.082 [2024-11-17 14:36:13.079425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.082 [2024-11-17 14:36:13.089958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:24.082 [2024-11-17 14:36:13.089979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.082 [2024-11-17 14:36:13.089987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.082 [2024-11-17 14:36:13.100031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:24.082 [2024-11-17 14:36:13.100052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.082 [2024-11-17 14:36:13.100061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.082 [2024-11-17 14:36:13.110856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:24.082 [2024-11-17 14:36:13.110877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.082 [2024-11-17 14:36:13.110885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.082 [2024-11-17 14:36:13.119424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:24.082 [2024-11-17 14:36:13.119445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.082 [2024-11-17 14:36:13.119453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.082 [2024-11-17 14:36:13.130238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:24.082 [2024-11-17 14:36:13.130258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.082 [2024-11-17 14:36:13.130267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.082 [2024-11-17 14:36:13.139900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370) 00:26:24.082 [2024-11-17 14:36:13.139921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.082 [2024-11-17 14:36:13.139929] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.082 [2024-11-17 14:36:13.150336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370)
00:26:24.082 [2024-11-17 14:36:13.150365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.082 [2024-11-17 14:36:13.150374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.082 23586.00 IOPS, 92.13 MiB/s [2024-11-17T13:36:13.307Z]
[... the same three-line pattern (nvme_tcp.c:1365 data digest error on tqpair=(0xc97370), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for many more cid/lba pairs, all len:1 reads; identical entries elided ...]
00:26:25.125 24419.00 IOPS, 95.39 MiB/s [2024-11-17T13:36:14.350Z]
00:26:25.125 [2024-11-17 14:36:14.179043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97370)
00:26:25.125 [2024-11-17 14:36:14.179063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.125 [2024-11-17 14:36:14.179071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.125 
00:26:25.125 Latency(us)
00:26:25.125 [2024-11-17T13:36:14.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:25.125 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:25.125 nvme0n1 : 2.00 24435.03 95.45 0.00 0.00 5232.33 2535.96 20059.71
00:26:25.125 [2024-11-17T13:36:14.350Z] ===================================================================================================================
00:26:25.125 [2024-11-17T13:36:14.350Z] Total : 24435.03 95.45 0.00 0.00 5232.33 2535.96 20059.71
00:26:25.125 {
00:26:25.125   "results": [
00:26:25.125     {
00:26:25.125       "job": "nvme0n1",
00:26:25.125       "core_mask": "0x2",
00:26:25.125       "workload": "randread",
00:26:25.125       "status": "finished",
00:26:25.125       "queue_depth": 128,
00:26:25.125       "io_size": 4096,
00:26:25.125       "runtime": 2.003926,
00:26:25.125       "iops": 24435.034028202637,
00:26:25.125       "mibps": 95.44935167266655,
00:26:25.125       "io_failed": 0,
00:26:25.125       "io_timeout": 0,
00:26:25.125       "avg_latency_us": 5232.327022157345,
00:26:25.125       "min_latency_us": 2535.958260869565,
00:26:25.125       "max_latency_us": 20059.714782608695
00:26:25.125     }
00:26:25.125   ],
00:26:25.125   "core_count": 1
00:26:25.125 }
00:26:25.125 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:25.125 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:25.125 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:25.125 | .driver_specific
00:26:25.125 | .nvme_error
00:26:25.125 | .status_code
00:26:25.125 | .command_transient_transport_error'
00:26:25.125 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 192 > 0 ))
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1610240
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1610240 ']'
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1610240
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1610240
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1610240'
00:26:25.384 killing process with pid 1610240
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1610240
00:26:25.384 Received shutdown signal, test time was about 2.000000 seconds
00:26:25.384 
00:26:25.384 Latency(us)
00:26:25.384 [2024-11-17T13:36:14.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:25.384 [2024-11-17T13:36:14.609Z] ===================================================================================================================
00:26:25.384 [2024-11-17T13:36:14.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
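[Editor's note: the get_transient_errcount helper traced at host/digest.sh@27-28 above amounts to one bdev_get_iostat RPC piped through jq. A minimal standalone sketch, assuming only what this run shows (the bperf.sock RPC socket, the nvme0n1 bdev, and the jq path printed in the trace); the helper body here is an illustration, not the verbatim host/digest.sh source:

    #!/usr/bin/env bash
    # Sketch of the transient-error readback traced above. Assumes a bdevperf
    # instance serving JSON-RPC on /var/tmp/bperf.sock, with the controller
    # attached through bdev_nvme and --nvme-error-stat enabled.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat reports per-status-code NVMe error counters when
        # error statistics are enabled; pull out the transient-error count.
        "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The pass condition is simply "at least one injected error was observed";
    # in the run above the counter came back as 192.
    (($(get_transient_errcount nvme0n1) > 0))

The (( 192 > 0 )) line in the trace is exactly this check evaluating to true, after which the first bdevperf instance is torn down.]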
00:26:25.384 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1610240
00:26:25.643 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:25.643 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:25.643 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:25.643 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:25.643 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:25.643 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1610888
00:26:25.643 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1610888 /var/tmp/bperf.sock
00:26:25.643 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:25.644 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1610888 ']'
00:26:25.644 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:25.644 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:25.644 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:25.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:25.644 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:25.644 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:25.644 [2024-11-17 14:36:14.653279] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:26:25.644 [2024-11-17 14:36:14.653326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610888 ]
00:26:25.644 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:25.644 Zero copy mechanism will not be used.
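[Editor's note: the bdevperf command traced at host/digest.sh@57 drives this second error pass (128 KiB random reads at queue depth 16). An annotated form of the same invocation; the flag meanings given in the comments are the standard bdevperf options and are noted here for reference, not taken from the trace itself:

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

    args=(
        -m 2                    # core mask 0x2: run the reactor on core 1
        -r /var/tmp/bperf.sock  # serve JSON-RPC on this UNIX domain socket
        -w randread             # workload type
        -o 131072               # I/O size in bytes (128 KiB, hence the zero-copy notice above)
        -t 2                    # run time in seconds
        -q 16                   # queue depth
        -z                      # start idle; bdevs are configured over RPC and the
                                # run is triggered later by a perform_tests RPC
    )
    "$BDEVPERF" "${args[@]}" &
    bperfpid=$!                 # the test records this pid (1610888 in this run)

Starting with -z is what allows the RPC sequence below to attach the controller and arm the error injection before any I/O is issued.]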
00:26:25.644 [2024-11-17 14:36:14.726846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:25.644 [2024-11-17 14:36:14.769509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:25.644 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:25.644 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:25.644 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:25.644 14:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:25.903 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:25.903 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:25.903 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:25.903 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:25.903 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:25.903 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:26.472 nvme0n1
00:26:26.472 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:26.472 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:26.472 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:26.472 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:26.472 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:26.472 14:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:26.472 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:26.472 Zero copy mechanism will not be used.
00:26:26.472 Running I/O for 2 seconds...
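[Editor's note: taken together, the RPC sequence just traced is the whole arming step for this error pass. A condensed sketch follows, with sockets, addresses, and arguments taken from the trace. One assumption worth flagging: the trace does not print which socket rpc_cmd uses, so the accel_error_inject_error calls are shown here without -s, going to the framework's default application socket, while the bdev_nvme calls explicitly address the bdevperf instance:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Count NVMe status codes per bdev and retry failed I/O indefinitely, so
    # each injected digest error is retried instead of failing the job.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any leftover injection before attaching the controller.
    "$rpc" accel_error_inject_error -o crc32c -t disable

    # Attach over TCP with data digest enabled (--ddgst), so every data PDU is
    # covered by a CRC32C digest that the injected corruption can violate.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm the corruption of crc32c operations (interval argument -i 32, as
    # traced); each hit surfaces below as a "data digest error" followed by a
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed run in the waiting (-z) bdevperf instance.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

The "Running I/O for 2 seconds..." line above marks the point where perform_tests starts the workload and the injected errors begin to appear.]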
00:26:26.472 [2024-11-17 14:36:15.593421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570)
[... as in the first pass, each digest error is followed by the READ command print (cid:9 lba:4960 len:32 for this first hit) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the three-line pattern repeats for cids 9-14 on tqpair=(0x1e74570), now all len:32 (128 KiB) reads; identical entries elided ...]
00:26:26.734 [2024-11-17 14:36:15.698433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570)
00:26:26.734 [2024-11-17 14:36:15.698454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.734 [2024-11-17 14:36:15.698462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.703648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.703669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.703678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.708839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.708860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.708868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.714031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.714057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.714065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.719235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.719257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.719265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.724440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.724463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.724471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.729657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.729679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.729687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.734866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.734889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.734897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.740051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.740073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.740081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.745259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.745282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.745290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.750513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.750534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.750543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.755758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.755780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.755791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.760999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.761020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.761029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.766306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.766328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.766335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.771525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.771547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.771556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.776684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.776705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.776713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.781903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.781925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.781933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.787053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.787075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.787084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.792249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.792270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.792278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.797460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.797481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.797489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.803828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.803857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.803865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.808880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 
00:26:26.734 [2024-11-17 14:36:15.808902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.808910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.813595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.813615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.734 [2024-11-17 14:36:15.813623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.734 [2024-11-17 14:36:15.818230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.734 [2024-11-17 14:36:15.818252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.818262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.822579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.822600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.822608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.827098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.827121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.827130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.831573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.831594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.831602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.836080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.836102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.836111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.840874] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.840896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.840904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.845630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.845653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.845662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.850490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.850513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.850522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.855577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.855599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.855608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.861008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.861030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.861039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.866704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.866728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.866737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.872096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.872119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.872128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b 
p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.877714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.877737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.877746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.883461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.883484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.883493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.889021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.889043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.889055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.894347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.894376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.894385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.899780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.899802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.899810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.905096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.905118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.905126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.910422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.910443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.910451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.915831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.915853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.915861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.921111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.921132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.921140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.926423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.926445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.926453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.931741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.931764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.931773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.937148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.937175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.937184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.942493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.942515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.942524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.735 [2024-11-17 14:36:15.947806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.735 [2024-11-17 14:36:15.947830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.735 [2024-11-17 14:36:15.947839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.996 [2024-11-17 14:36:15.953271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.996 [2024-11-17 14:36:15.953295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.996 [2024-11-17 14:36:15.953303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.996 [2024-11-17 14:36:15.958711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.996 [2024-11-17 14:36:15.958734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.996 [2024-11-17 14:36:15.958743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.996 [2024-11-17 14:36:15.964447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.996 [2024-11-17 14:36:15.964470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.996 [2024-11-17 14:36:15.964479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.996 [2024-11-17 14:36:15.969852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.996 [2024-11-17 14:36:15.969875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.996 [2024-11-17 14:36:15.969884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.996 [2024-11-17 14:36:15.975331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.996 [2024-11-17 14:36:15.975360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.996 [2024-11-17 14:36:15.975369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.996 [2024-11-17 14:36:15.980736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.996 [2024-11-17 14:36:15.980759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.996 [2024-11-17 14:36:15.980771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.996 [2024-11-17 14:36:15.986229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.996 [2024-11-17 14:36:15.986251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.996 [2024-11-17 14:36:15.986260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.996 [2024-11-17 14:36:15.991615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.996 [2024-11-17 14:36:15.991637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.996 [2024-11-17 14:36:15.991646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:15.997083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:15.997106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:15.997114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.002423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.002446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.002454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.007699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.007721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.007730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.013175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.013199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.013207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.018507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.018529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.018538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.023851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.023874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.023883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.029051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.029078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.029086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.034631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.034654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.034663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.040120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.040142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.040151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.044987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.045010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.045018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.048507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.048529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.048538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.052430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.052452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.052461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.056597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.056619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.056628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.061560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.061583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.061592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.066588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.066611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.066619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.071548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.071572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.071580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.076649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.076672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.076680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.081886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.081909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.081917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.087283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.087305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.087313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.092626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 
00:26:26.997 [2024-11-17 14:36:16.092647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.092656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.097954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.097976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.097984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.103326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.103349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.103364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.108650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.108672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.108681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.113978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.114001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.114014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.119077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.119106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.119114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.124388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.124411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.124420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.997 [2024-11-17 14:36:16.130755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.997 [2024-11-17 14:36:16.130777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.997 [2024-11-17 14:36:16.130786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.998 [2024-11-17 14:36:16.136139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.998 [2024-11-17 14:36:16.136161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.998 [2024-11-17 14:36:16.136170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.998 [2024-11-17 14:36:16.141475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.998 [2024-11-17 14:36:16.141497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.998 [2024-11-17 14:36:16.141506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.998 [2024-11-17 14:36:16.146842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.998 [2024-11-17 14:36:16.146864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.998 [2024-11-17 14:36:16.146872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.998 [2024-11-17 14:36:16.152159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.998 [2024-11-17 14:36:16.152181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.998 [2024-11-17 14:36:16.152190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:26.998 [2024-11-17 14:36:16.157370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.998 [2024-11-17 14:36:16.157392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.998 [2024-11-17 14:36:16.157401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:26.998 [2024-11-17 14:36:16.160955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.998 [2024-11-17 14:36:16.160980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.998 [2024-11-17 14:36:16.160988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.998 [2024-11-17 14:36:16.165727] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:26.998 [2024-11-17 14:36:16.165753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.998 [2024-11-17 14:36:16.165761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.998
[repeated log output condensed: from 14:36:16.171 through 14:36:17.057 the same three-line pattern recurs continuously: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570); the offending READ command printed by nvme_qpair.c: 243:nvme_io_qpair_print_command (qid:1, nsid:1, len:32, varying cid and lba, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); and its completion printed by nvme_qpair.c: 474:spdk_nvme_print_completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22). One bdevperf throughput sample is interleaved: 5539.00 IOPS, 692.38 MiB/s [2024-11-17T13:36:16.746Z]]
[2024-11-17 14:36:17.057554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.057577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.057585] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.062524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.062546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.062554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.065570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.065592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.065601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.071178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.071200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.071208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.076390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.076412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.076421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.081630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.081650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.081659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.086949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.086971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.086980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.092222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.092245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 
14:36:17.092253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.097653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.097674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.097686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.103144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.103166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.103174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.108602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.108623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.108631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.114178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.114200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.114208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.119653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.119674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.119683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.125279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.125302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.125310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.130658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.130679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23776 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.130688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.136093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.136115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.136124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.141222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.141243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.141252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.146412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.146434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.146442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.151515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.151537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.044 [2024-11-17 14:36:17.151545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.044 [2024-11-17 14:36:17.156381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.044 [2024-11-17 14:36:17.156404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.156413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.161317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.161339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.161350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.166259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.166282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.166291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.171263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.171285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.171294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.176365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.176389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.176398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.181746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.181769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.181777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.186827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.186849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.186862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.192122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.192144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.192153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.197222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.197244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.197254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.202552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.202575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.202583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.208057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.208078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.208087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.213391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.213414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.213422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.218919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.218941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.218949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.224191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.224214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.224222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.229680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.229701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.229709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.235125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.235151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.235159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.240608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 
[2024-11-17 14:36:17.240629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.240638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.246039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.246062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.246070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.251449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.251470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.251478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.256561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.256584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.256593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.045 [2024-11-17 14:36:17.261893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.045 [2024-11-17 14:36:17.261916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.045 [2024-11-17 14:36:17.261924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.306 [2024-11-17 14:36:17.267113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.306 [2024-11-17 14:36:17.267135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.306 [2024-11-17 14:36:17.267144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.306 [2024-11-17 14:36:17.272409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.306 [2024-11-17 14:36:17.272430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.306 [2024-11-17 14:36:17.272439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.306 [2024-11-17 14:36:17.277670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1e74570) 00:26:28.306 [2024-11-17 14:36:17.277691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.306 [2024-11-17 14:36:17.277700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.306 [2024-11-17 14:36:17.283119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.306 [2024-11-17 14:36:17.283141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.306 [2024-11-17 14:36:17.283149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.306 [2024-11-17 14:36:17.288480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.306 [2024-11-17 14:36:17.288501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.306 [2024-11-17 14:36:17.288509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.306 [2024-11-17 14:36:17.293865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.306 [2024-11-17 14:36:17.293886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.306 [2024-11-17 14:36:17.293894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.306 [2024-11-17 14:36:17.299262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.306 [2024-11-17 14:36:17.299283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.306 [2024-11-17 14:36:17.299292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.306 [2024-11-17 14:36:17.304628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.306 [2024-11-17 14:36:17.304652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.306 [2024-11-17 14:36:17.304660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.306 [2024-11-17 14:36:17.309612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.306 [2024-11-17 14:36:17.309633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.306 [2024-11-17 14:36:17.309641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.306 [2024-11-17 14:36:17.314844] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.306 [2024-11-17 14:36:17.314866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.306 [2024-11-17 14:36:17.314874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.306 [2024-11-17 14:36:17.320174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.306 [2024-11-17 14:36:17.320196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.306 [2024-11-17 14:36:17.320205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.325436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.325458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.325472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.329694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.329716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.329724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.332675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.332696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.332705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.337985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.338007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.338015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.343660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.343682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.343692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:26:28.307 [2024-11-17 14:36:17.351437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.351460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.351468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.358430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.358452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.358461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.365019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.365043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.365053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.370524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.370547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.370556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.375973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.376000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.376008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.381391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.381413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.381421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.386866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.386888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.386896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.392325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.392346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.392361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.397875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.397896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.397904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.403274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.403296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.403304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.408716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.408736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.408745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.414003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.414025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.414033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.419197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.419218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.419227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.424392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.424414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.424421] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.429794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.429817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.429825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.435172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.435195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.435203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.440610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.440632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.440641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.446052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.446074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.446082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.451595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.451616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.451625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.457023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.457044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.457052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.462446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.462469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.307 [2024-11-17 14:36:17.462477] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.307 [2024-11-17 14:36:17.467932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.307 [2024-11-17 14:36:17.467954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.308 [2024-11-17 14:36:17.467966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.308 [2024-11-17 14:36:17.473493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.308 [2024-11-17 14:36:17.473515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.308 [2024-11-17 14:36:17.473523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.308 [2024-11-17 14:36:17.479115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.308 [2024-11-17 14:36:17.479136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.308 [2024-11-17 14:36:17.479144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.308 [2024-11-17 14:36:17.484602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.308 [2024-11-17 14:36:17.484625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.308 [2024-11-17 14:36:17.484634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.308 [2024-11-17 14:36:17.490124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.308 [2024-11-17 14:36:17.490147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.308 [2024-11-17 14:36:17.490155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.308 [2024-11-17 14:36:17.495569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.308 [2024-11-17 14:36:17.495592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.308 [2024-11-17 14:36:17.495600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.308 [2024-11-17 14:36:17.501003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.308 [2024-11-17 14:36:17.501025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.308 [2024-11-17 14:36:17.501033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.308 [2024-11-17 14:36:17.506523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.308 [2024-11-17 14:36:17.506545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.308 [2024-11-17 14:36:17.506554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.308 [2024-11-17 14:36:17.512017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.308 [2024-11-17 14:36:17.512038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.308 [2024-11-17 14:36:17.512047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.308 [2024-11-17 14:36:17.517316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.308 [2024-11-17 14:36:17.517338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.308 [2024-11-17 14:36:17.517346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.308 [2024-11-17 14:36:17.522623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.308 [2024-11-17 14:36:17.522645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.308 [2024-11-17 14:36:17.522664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.527972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.527993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.528002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.533302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.533324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.533333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.538581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.538604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.538612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.543894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.543915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.543923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.549183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.549205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.549213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.554478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.554501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.554509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.559809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.559832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.559843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.565117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.565140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.565148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.570413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.570434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.570443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.575732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.575753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.575761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.581071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.581093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.581101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.568 [2024-11-17 14:36:17.586419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e74570) 00:26:28.568 [2024-11-17 14:36:17.586441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.568 [2024-11-17 14:36:17.586449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:28.568 5378.50 IOPS, 672.31 MiB/s
00:26:28.568 Latency(us)
00:26:28.568 [2024-11-17T13:36:17.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:28.568 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:28.568 nvme0n1 : 2.00 5379.95 672.49 0.00 0.00 2971.30 633.99 9118.05
00:26:28.568 [2024-11-17T13:36:17.793Z] ===================================================================================================================
00:26:28.568 [2024-11-17T13:36:17.793Z] Total : 5379.95 672.49 0.00 0.00 2971.30 633.99 9118.05
00:26:28.568 {
00:26:28.568 "results": [
00:26:28.568 {
00:26:28.568 "job": "nvme0n1",
00:26:28.568 "core_mask": "0x2",
00:26:28.568 "workload": "randread",
00:26:28.568 "status": "finished",
00:26:28.568 "queue_depth": 16,
00:26:28.568 "io_size": 131072,
00:26:28.568 "runtime": 2.002434,
00:26:28.568 "iops": 5379.9525976886125,
00:26:28.568 "mibps": 672.4940747110766,
00:26:28.568 "io_failed": 0,
00:26:28.568 "io_timeout": 0,
00:26:28.568 "avg_latency_us": 2971.295157055279,
00:26:28.568 "min_latency_us": 633.9895652173913,
00:26:28.568 "max_latency_us": 9118.052173913044
00:26:28.568 }
00:26:28.568 ],
00:26:28.568 "core_count": 1
00:26:28.568 }
00:26:28.568 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:28.568 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:28.568 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:28.568 | .driver_specific
00:26:28.568 | .nvme_error
00:26:28.568 | .status_code
00:26:28.568 | .command_transient_transport_error'
00:26:28.568 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:28.828 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 347 > 0 ))
00:26:28.828 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1610888
14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1610888 ']'
00:26:28.828 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1610888
00:26:28.828 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:28.828 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:28.828 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1610888
00:26:28.828 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:28.828 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:28.828 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1610888'
00:26:28.828 killing process with pid 1610888
00:26:28.828 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1610888
00:26:28.828 Received shutdown signal, test time was about 2.000000 seconds
00:26:28.828
00:26:28.828 Latency(us)
00:26:28.828 [2024-11-17T13:36:18.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:28.828 [2024-11-17T13:36:18.053Z] ===================================================================================================================
00:26:28.828 [2024-11-17T13:36:18.053Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:28.828 14:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1610888
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1611405
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1611405 /var/tmp/bperf.sock
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1611405 ']'
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:28.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.828 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.087 [2024-11-17 14:36:18.066261] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:26:29.087 [2024-11-17 14:36:18.066310] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611405 ] 00:26:29.087 [2024-11-17 14:36:18.140712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.087 [2024-11-17 14:36:18.183232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.087 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.087 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:29.087 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:29.087 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:29.346 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:29.346 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.347 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.347 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.347 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.347 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.606 nvme0n1 00:26:29.606 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:29.606 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.606 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.606 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.606 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:29.606 14:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.606 Running I/O for 2 seconds... 
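The xtrace above shows the whole digest-error setup for this run. Collected in order, the sequence is roughly the following; every command and flag is taken verbatim from the log, but the target RPC socket path is an assumption, since the harness's rpc_cmd wrapper hides it and /var/tmp/spdk.sock is only the SPDK default:

    rpc=./scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock     # bdevperf (host) RPC socket
    tgt_sock=/var/tmp/spdk.sock        # nvmf target RPC socket (assumed default)
    # host side: retry failed I/O indefinitely and keep NVMe error statistics
    $rpc -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: make sure crc32c error injection starts out disabled
    $rpc -s $tgt_sock accel_error_inject_error -o crc32c -t disable
    # host side: attach the controller with data digest (--ddgst) on the TCP connection
    $rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: now corrupt crc32c results (-i 256 as used by this test)
    $rpc -s $tgt_sock accel_error_inject_error -o crc32c -t corrupt -i 256
    # kick off the queued randwrite workload in the idle bdevperf
    ./examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests

With the target's crc32c corrupted, its data-digest check on incoming WRITE data fails (the tcp.c data_crc32_calc_done errors below) and each affected command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and dnr:0, which the host is free to retry given the infinite --bdev-retry-count.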
00:26:29.606 [2024-11-17 14:36:18.824564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f0788 00:26:29.606 [2024-11-17 14:36:18.825685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.606 [2024-11-17 14:36:18.825716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.865 [2024-11-17 14:36:18.834210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e0ea0 00:26:29.865 [2024-11-17 14:36:18.834832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.865 [2024-11-17 14:36:18.834854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.865 [2024-11-17 14:36:18.843692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ecc78 00:26:29.865 [2024-11-17 14:36:18.844772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.865 [2024-11-17 14:36:18.844796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.865 [2024-11-17 14:36:18.853078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f9b30 00:26:29.865 [2024-11-17 14:36:18.853718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.865 [2024-11-17 14:36:18.853739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.865 [2024-11-17 14:36:18.861847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f4f40 00:26:29.865 [2024-11-17 14:36:18.862405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.865 [2024-11-17 14:36:18.862426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.865 [2024-11-17 14:36:18.871436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166eff18 00:26:29.865 [2024-11-17 14:36:18.872166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.865 [2024-11-17 14:36:18.872186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.865 [2024-11-17 14:36:18.880532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166eff18 00:26:29.865 [2024-11-17 14:36:18.881400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.865 [2024-11-17 14:36:18.881420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:26:29.865 [2024-11-17 14:36:18.889131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f92c0 00:26:29.865 [2024-11-17 14:36:18.889936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.865 [2024-11-17 14:36:18.889956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.865 [2024-11-17 14:36:18.898834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ebfd0 00:26:29.865 [2024-11-17 14:36:18.899695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.865 [2024-11-17 14:36:18.899715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.865 [2024-11-17 14:36:18.908519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ecc78 00:26:29.865 [2024-11-17 14:36:18.909503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.865 [2024-11-17 14:36:18.909523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.865 [2024-11-17 14:36:18.918202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f35f0 00:26:29.865 [2024-11-17 14:36:18.919372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.865 [2024-11-17 14:36:18.919391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:18.927870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f92c0 00:26:29.866 [2024-11-17 14:36:18.929184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:18.929204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:18.937521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f2510 00:26:29.866 [2024-11-17 14:36:18.938855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:18.938875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:18.945515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fe720 00:26:29.866 [2024-11-17 14:36:18.946497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:18.946516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:18.954705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fe720 00:26:29.866 [2024-11-17 14:36:18.955654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:18.955673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:18.963918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fe720 00:26:29.866 [2024-11-17 14:36:18.964854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:18.964874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:18.973204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fe720 00:26:29.866 [2024-11-17 14:36:18.974304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:18.974324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:18.982552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fe720 00:26:29.866 [2024-11-17 14:36:18.983485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:18.983503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:18.991112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f0350 00:26:29.866 [2024-11-17 14:36:18.991944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:18.991964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:19.000797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166eff18 00:26:29.866 [2024-11-17 14:36:19.001837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:19.001857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:19.012171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ee5c8 00:26:29.866 [2024-11-17 14:36:19.013716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:19.013735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:19.018666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f20d8 00:26:29.866 [2024-11-17 14:36:19.019342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:19.019365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:19.028325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fdeb0 00:26:29.866 [2024-11-17 14:36:19.029133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:19.029152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:19.037681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ebb98 00:26:29.866 [2024-11-17 14:36:19.038536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:19.038556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:19.047204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e9e10 00:26:29.866 [2024-11-17 14:36:19.048135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:19.048154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:19.055912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ec408 00:26:29.866 [2024-11-17 14:36:19.056830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:19.056848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:19.066155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166eaab8 00:26:29.866 [2024-11-17 14:36:19.067205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:19.067225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:19.075811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f7100 00:26:29.866 [2024-11-17 14:36:19.077059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.866 [2024-11-17 14:36:19.077079] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.866 [2024-11-17 14:36:19.085141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e7818 00:26:30.126 [2024-11-17 14:36:19.086164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.086188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.093703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fa3a0 00:26:30.126 [2024-11-17 14:36:19.094985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.095005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.102234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166df988 00:26:30.126 [2024-11-17 14:36:19.102979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.102999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.111650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f7538 00:26:30.126 [2024-11-17 14:36:19.112145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.112164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.122250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f35f0 00:26:30.126 [2024-11-17 14:36:19.123534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.123553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.131900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f1430 00:26:30.126 [2024-11-17 14:36:19.133300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.133319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.141544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fef90 00:26:30.126 [2024-11-17 14:36:19.143086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.143104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.148031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e95a0 00:26:30.126 [2024-11-17 14:36:19.148721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.148740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.156736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f1ca0 00:26:30.126 [2024-11-17 14:36:19.157405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.157424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.166380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fc998 00:26:30.126 [2024-11-17 14:36:19.167196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.167215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.176671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fac10 00:26:30.126 [2024-11-17 14:36:19.177658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.177677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.186111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ed920 00:26:30.126 [2024-11-17 14:36:19.186839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.186858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.194819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e12d8 00:26:30.126 [2024-11-17 14:36:19.196077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.196096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.203334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f81e0 00:26:30.126 [2024-11-17 14:36:19.204049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 
14:36:19.204068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.212548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ddc00 00:26:30.126 [2024-11-17 14:36:19.213274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.213293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.222114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ed4e8 00:26:30.126 [2024-11-17 14:36:19.222939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.222958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.230821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ed0b0 00:26:30.126 [2024-11-17 14:36:19.231525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.231544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.240309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166eb760 00:26:30.126 [2024-11-17 14:36:19.241005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.241024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.250771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f6458 00:26:30.126 [2024-11-17 14:36:19.251917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.251936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.259384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e99d8 00:26:30.126 [2024-11-17 14:36:19.260222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.260240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.268559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e12d8 00:26:30.126 [2024-11-17 14:36:19.269404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.126 [2024-11-17 14:36:19.269424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.277936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166dece0 00:26:30.126 [2024-11-17 14:36:19.278834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.278854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.287311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ef270 00:26:30.126 [2024-11-17 14:36:19.288152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.288171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.296891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f1430 00:26:30.126 [2024-11-17 14:36:19.297529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.126 [2024-11-17 14:36:19.297549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.126 [2024-11-17 14:36:19.306339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e84c0 00:26:30.127 [2024-11-17 14:36:19.307207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.127 [2024-11-17 14:36:19.307228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.127 [2024-11-17 14:36:19.316811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e84c0 00:26:30.127 [2024-11-17 14:36:19.318225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.127 [2024-11-17 14:36:19.318245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.127 [2024-11-17 14:36:19.326530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e95a0 00:26:30.127 [2024-11-17 14:36:19.327993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.127 [2024-11-17 14:36:19.328018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.127 [2024-11-17 14:36:19.333156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f2510 00:26:30.127 [2024-11-17 14:36:19.333892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14527 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:30.127 [2024-11-17 14:36:19.333912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.127 [2024-11-17 14:36:19.344936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f5be8 00:26:30.385 [2024-11-17 14:36:19.346372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.346392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.352801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f6890 00:26:30.385 [2024-11-17 14:36:19.353404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.353424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.362115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ee190 00:26:30.385 [2024-11-17 14:36:19.363056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.363074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.371585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fef90 00:26:30.385 [2024-11-17 14:36:19.372414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.372434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.380269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e3060 00:26:30.385 [2024-11-17 14:36:19.381531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.381549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.388190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f5378 00:26:30.385 [2024-11-17 14:36:19.388889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.388908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.398454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f7970 00:26:30.385 [2024-11-17 14:36:19.399302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:19635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.399321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.407033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e73e0 00:26:30.385 [2024-11-17 14:36:19.407860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.407883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.416748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166eb760 00:26:30.385 [2024-11-17 14:36:19.417677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.417696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.426388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ee5c8 00:26:30.385 [2024-11-17 14:36:19.427417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.427435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.436019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f3e60 00:26:30.385 [2024-11-17 14:36:19.437171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.437190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.445657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e73e0 00:26:30.385 [2024-11-17 14:36:19.446961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.446980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.385 [2024-11-17 14:36:19.454214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166feb58 00:26:30.385 [2024-11-17 14:36:19.455165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.385 [2024-11-17 14:36:19.455185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.463559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166efae0 00:26:30.386 [2024-11-17 14:36:19.464287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:20 nsid:1 lba:4478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.464306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.472285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f46d0 00:26:30.386 [2024-11-17 14:36:19.473606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.473625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.480804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fe2e8 00:26:30.386 [2024-11-17 14:36:19.481544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.481562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.490022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e4578 00:26:30.386 [2024-11-17 14:36:19.490715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.490734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.499248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f5378 00:26:30.386 [2024-11-17 14:36:19.499973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.499993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.510794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f0ff8 00:26:30.386 [2024-11-17 14:36:19.512222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.512241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.517492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e6300 00:26:30.386 [2024-11-17 14:36:19.518211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.518231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.528832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ebfd0 00:26:30.386 [2024-11-17 14:36:19.529907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.529926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.538169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166eea00 00:26:30.386 [2024-11-17 14:36:19.539268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.539287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.547411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e0ea0 00:26:30.386 [2024-11-17 14:36:19.548508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.548527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.556613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166de470 00:26:30.386 [2024-11-17 14:36:19.557724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.557743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.565829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e73e0 00:26:30.386 [2024-11-17 14:36:19.566935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.566954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.575074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e4de8 00:26:30.386 [2024-11-17 14:36:19.576069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.576089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.584605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e5658 00:26:30.386 [2024-11-17 14:36:19.585727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.585747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.592130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f6cc8 00:26:30.386 [2024-11-17 
14:36:19.592796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.592815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.386 [2024-11-17 14:36:19.602126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166efae0 00:26:30.386 [2024-11-17 14:36:19.603115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.386 [2024-11-17 14:36:19.603134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.612019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ebb98 00:26:30.645 [2024-11-17 14:36:19.613092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.613111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.621448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e0a68 00:26:30.645 [2024-11-17 14:36:19.622517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.622536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.630523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fdeb0 00:26:30.645 [2024-11-17 14:36:19.631144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.631163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.639494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f9f68 00:26:30.645 [2024-11-17 14:36:19.640509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.640528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.648836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e3d08 00:26:30.645 [2024-11-17 14:36:19.649702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.649724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.659040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166dece0 
00:26:30.645 [2024-11-17 14:36:19.660238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.660257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.666992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f0ff8 00:26:30.645 [2024-11-17 14:36:19.667635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.667655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.675489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fa3a0 00:26:30.645 [2024-11-17 14:36:19.676204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.676223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.684838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f1ca0 00:26:30.645 [2024-11-17 14:36:19.685476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.685494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.697263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e49b0 00:26:30.645 [2024-11-17 14:36:19.698839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.698858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.703988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e8088 00:26:30.645 [2024-11-17 14:36:19.704843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.704862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.713309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f1868 00:26:30.645 [2024-11-17 14:36:19.714170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.714188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.724575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with 
pdu=0x2000166e1b48 00:26:30.645 [2024-11-17 14:36:19.725914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.725933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.734208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e7c50 00:26:30.645 [2024-11-17 14:36:19.735685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.735704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.743828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f1868 00:26:30.645 [2024-11-17 14:36:19.745414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.745433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.750563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e0ea0 00:26:30.645 [2024-11-17 14:36:19.751432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.751451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.762118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f2948 00:26:30.645 [2024-11-17 14:36:19.763476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.763494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.645 [2024-11-17 14:36:19.771453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e1b48 00:26:30.645 [2024-11-17 14:36:19.772822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.645 [2024-11-17 14:36:19.772841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.646 [2024-11-17 14:36:19.780507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e4de8 00:26:30.646 [2024-11-17 14:36:19.781871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.646 [2024-11-17 14:36:19.781890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.646 [2024-11-17 14:36:19.787254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x164b650) with pdu=0x2000166e2c28 00:26:30.646 [2024-11-17 14:36:19.787907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.646 [2024-11-17 14:36:19.787926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.646 [2024-11-17 14:36:19.796888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f96f8 00:26:30.646 [2024-11-17 14:36:19.797645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.646 [2024-11-17 14:36:19.797663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.646 [2024-11-17 14:36:19.806541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fb8b8 00:26:30.646 [2024-11-17 14:36:19.807429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.646 [2024-11-17 14:36:19.807448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.646 27329.00 IOPS, 106.75 MiB/s [2024-11-17T13:36:19.871Z] [2024-11-17 14:36:19.816381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fcdd0 00:26:30.646 [2024-11-17 14:36:19.816939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.646 [2024-11-17 14:36:19.816959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.646 [2024-11-17 14:36:19.824812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fb8b8 00:26:30.646 [2024-11-17 14:36:19.825460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.646 [2024-11-17 14:36:19.825479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.646 [2024-11-17 14:36:19.836409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e2c28 00:26:30.646 [2024-11-17 14:36:19.837485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.646 [2024-11-17 14:36:19.837504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.646 [2024-11-17 14:36:19.844593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f3e60 00:26:30.646 [2024-11-17 14:36:19.845155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.646 [2024-11-17 14:36:19.845174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.646 [2024-11-17 
14:36:19.854074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e38d0 00:26:30.646 [2024-11-17 14:36:19.854879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.646 [2024-11-17 14:36:19.854898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.646 [2024-11-17 14:36:19.864723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ec840 00:26:30.905 [2024-11-17 14:36:19.866007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.866026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.873370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f96f8 00:26:30.905 [2024-11-17 14:36:19.874643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.874663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.881936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e49b0 00:26:30.905 [2024-11-17 14:36:19.882616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.882634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.890537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e5220 00:26:30.905 [2024-11-17 14:36:19.891175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.891196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.901951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ddc00 00:26:30.905 [2024-11-17 14:36:19.903210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.903230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.910711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e5220 00:26:30.905 [2024-11-17 14:36:19.911690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.911710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:26:30.905 [2024-11-17 14:36:19.919916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fcdd0 00:26:30.905 [2024-11-17 14:36:19.920726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.920745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.929211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e2c28 00:26:30.905 [2024-11-17 14:36:19.930035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.930055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.938449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e2c28 00:26:30.905 [2024-11-17 14:36:19.939252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.939271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.948082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f5be8 00:26:30.905 [2024-11-17 14:36:19.949131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.949150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.957233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f9f68 00:26:30.905 [2024-11-17 14:36:19.958267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.958285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.965176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e4de8 00:26:30.905 [2024-11-17 14:36:19.965744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.965763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.974326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e38d0 00:26:30.905 [2024-11-17 14:36:19.974884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.974904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 
cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.985322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f5be8 00:26:30.905 [2024-11-17 14:36:19.986447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.986466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:19.993048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e7818 00:26:30.905 [2024-11-17 14:36:19.993612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:19.993631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:20.002790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fc560 00:26:30.905 [2024-11-17 14:36:20.003723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:20.003742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:20.012426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ea680 00:26:30.905 [2024-11-17 14:36:20.012884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:20.012904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:20.023109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166de038 00:26:30.905 [2024-11-17 14:36:20.023838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:20.023861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:20.033045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ed920 00:26:30.905 [2024-11-17 14:36:20.033767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:20.033790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.905 [2024-11-17 14:36:20.042849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ed920 00:26:30.905 [2024-11-17 14:36:20.043573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.905 [2024-11-17 14:36:20.043596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.906 [2024-11-17 14:36:20.052875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e12d8 00:26:30.906 [2024-11-17 14:36:20.053929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.906 [2024-11-17 14:36:20.053949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.906 [2024-11-17 14:36:20.062848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f7538 00:26:30.906 [2024-11-17 14:36:20.064025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.906 [2024-11-17 14:36:20.064046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.906 [2024-11-17 14:36:20.071878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f20d8 00:26:30.906 [2024-11-17 14:36:20.072785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.906 [2024-11-17 14:36:20.072806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.906 [2024-11-17 14:36:20.081425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e49b0 00:26:30.906 [2024-11-17 14:36:20.082296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.906 [2024-11-17 14:36:20.082316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.906 [2024-11-17 14:36:20.092657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fc998 00:26:30.906 [2024-11-17 14:36:20.093288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.906 [2024-11-17 14:36:20.093312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.906 [2024-11-17 14:36:20.102431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e5658 00:26:30.906 [2024-11-17 14:36:20.103359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.906 [2024-11-17 14:36:20.103379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.906 [2024-11-17 14:36:20.112405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ddc00 00:26:30.906 [2024-11-17 14:36:20.113492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.906 [2024-11-17 14:36:20.113513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.906 [2024-11-17 14:36:20.124088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fef90 00:26:31.164 [2024-11-17 14:36:20.125658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.125678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.130780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f4b08 00:26:31.164 [2024-11-17 14:36:20.131501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.131521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.140950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e6b70 00:26:31.164 [2024-11-17 14:36:20.141917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.141940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.150635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e27f0 00:26:31.164 [2024-11-17 14:36:20.151145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.151165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.160560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e38d0 00:26:31.164 [2024-11-17 14:36:20.161197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.161216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.169861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f4298 00:26:31.164 [2024-11-17 14:36:20.170802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.170822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.179310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fd640 00:26:31.164 [2024-11-17 14:36:20.180183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.180202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.189518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f8a50 00:26:31.164 [2024-11-17 14:36:20.190607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.190627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.199105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e5a90 00:26:31.164 [2024-11-17 14:36:20.200197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.200217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.210531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e8088 00:26:31.164 [2024-11-17 14:36:20.212134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.212154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.217347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ec408 00:26:31.164 [2024-11-17 14:36:20.218147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.218166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.227417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ec408 00:26:31.164 [2024-11-17 14:36:20.228327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.228346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.236262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f1868 00:26:31.164 [2024-11-17 14:36:20.237128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.237147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.245906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fdeb0 00:26:31.164 [2024-11-17 14:36:20.246797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 
14:36:20.246817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.254919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166feb58 00:26:31.164 [2024-11-17 14:36:20.255685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.255704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.266193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e9168 00:26:31.164 [2024-11-17 14:36:20.267360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.267379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.274395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166eaef0 00:26:31.164 [2024-11-17 14:36:20.275062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.275082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.284384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e6300 00:26:31.164 [2024-11-17 14:36:20.285396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.285416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.296128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e4de8 00:26:31.164 [2024-11-17 14:36:20.297632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.297651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.303040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f5be8 00:26:31.164 [2024-11-17 14:36:20.303730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.303749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.313101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f5be8 00:26:31.164 [2024-11-17 14:36:20.313796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:31.164 [2024-11-17 14:36:20.313816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.324870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e1f80 00:26:31.164 [2024-11-17 14:36:20.326404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.326424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.331683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f0788 00:26:31.164 [2024-11-17 14:36:20.332375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.332395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.341731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f0788 00:26:31.164 [2024-11-17 14:36:20.342438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.342459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.351688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f92c0 00:26:31.164 [2024-11-17 14:36:20.352247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.352267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.360966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e27f0 00:26:31.164 [2024-11-17 14:36:20.361809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.361828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.370684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f6890 00:26:31.164 [2024-11-17 14:36:20.371600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.371619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:31.164 [2024-11-17 14:36:20.382612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f4298 00:26:31.164 [2024-11-17 14:36:20.384049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18872 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:31.164 [2024-11-17 14:36:20.384069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.392551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f6020 00:26:31.423 [2024-11-17 14:36:20.394013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.394035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.400803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f0350 00:26:31.423 [2024-11-17 14:36:20.401777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.401809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.409642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e6300 00:26:31.423 [2024-11-17 14:36:20.410701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.410720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.419227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e88f8 00:26:31.423 [2024-11-17 14:36:20.419819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.419839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.428495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e9168 00:26:31.423 [2024-11-17 14:36:20.429376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.429397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.437857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fcdd0 00:26:31.423 [2024-11-17 14:36:20.438671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.438691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.447772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f1ca0 00:26:31.423 [2024-11-17 14:36:20.448738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2104 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.448757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.457622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e4140 00:26:31.423 [2024-11-17 14:36:20.458332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.458358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.466860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f92c0 00:26:31.423 [2024-11-17 14:36:20.467932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.467952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.476312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ed0b0 00:26:31.423 [2024-11-17 14:36:20.477299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.477318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.486488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e49b0 00:26:31.423 [2024-11-17 14:36:20.487678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.487697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.496085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e5ec8 00:26:31.423 [2024-11-17 14:36:20.497279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.497298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.505854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f7100 00:26:31.423 [2024-11-17 14:36:20.507045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.507064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.514715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fcdd0 00:26:31.423 [2024-11-17 14:36:20.515811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:92 nsid:1 lba:9934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.515830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.523826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e4578 00:26:31.423 [2024-11-17 14:36:20.524893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.524913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.533756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f2510 00:26:31.423 [2024-11-17 14:36:20.534945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.534965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.543343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e3d08 00:26:31.423 [2024-11-17 14:36:20.544542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.544562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.552691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f7100 00:26:31.423 [2024-11-17 14:36:20.553419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.553439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.561626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f92c0 00:26:31.423 [2024-11-17 14:36:20.562936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.562955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.569628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f0ff8 00:26:31.423 [2024-11-17 14:36:20.570316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.570335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.581175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166eee38 00:26:31.423 [2024-11-17 14:36:20.582374] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.582393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.589861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f7da8 00:26:31.423 [2024-11-17 14:36:20.590782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.590802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.599250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ebfd0 00:26:31.423 [2024-11-17 14:36:20.600256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.600275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.610609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e0ea0 00:26:31.423 [2024-11-17 14:36:20.612072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.612092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.617494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fc998 00:26:31.423 [2024-11-17 14:36:20.618272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.618291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.628866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166edd58 00:26:31.423 [2024-11-17 14:36:20.630116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.630135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:31.423 [2024-11-17 14:36:20.637515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e9168 00:26:31.423 [2024-11-17 14:36:20.638518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.423 [2024-11-17 14:36:20.638541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.647236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fc998 00:26:31.684 [2024-11-17 14:36:20.648143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.648162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.656658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166dece0 00:26:31.684 [2024-11-17 14:36:20.657584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.657603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.665876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e5658 00:26:31.684 [2024-11-17 14:36:20.666797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.666817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.675081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e73e0 00:26:31.684 [2024-11-17 14:36:20.676006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.676025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.684317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fc128 00:26:31.684 [2024-11-17 14:36:20.685237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.685256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.693506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fb048 00:26:31.684 [2024-11-17 14:36:20.694426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.694445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.702078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e4de8 00:26:31.684 [2024-11-17 14:36:20.702955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.702973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.712335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f2d80 00:26:31.684 [2024-11-17 
14:36:20.713400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.713419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.721548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ec840 00:26:31.684 [2024-11-17 14:36:20.722591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.722620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.730824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e01f8 00:26:31.684 [2024-11-17 14:36:20.731840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.731859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.740045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ec408 00:26:31.684 [2024-11-17 14:36:20.741059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.741077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.749261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f6020 00:26:31.684 [2024-11-17 14:36:20.750278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.750297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.758495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166f7100 00:26:31.684 [2024-11-17 14:36:20.759542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.759562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.767765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fa7d8 00:26:31.684 [2024-11-17 14:36:20.768813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.684 [2024-11-17 14:36:20.768833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:31.684 [2024-11-17 14:36:20.776978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166ebb98 
00:26:31.685 [2024-11-17 14:36:20.777998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:31.685 [2024-11-17 14:36:20.778017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:31.685 [2024-11-17 14:36:20.786221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e1b48
00:26:31.685 [2024-11-17 14:36:20.787262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:31.685 [2024-11-17 14:36:20.787281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:31.685 [2024-11-17 14:36:20.795439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e3060
00:26:31.685 [2024-11-17 14:36:20.796450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:31.685 [2024-11-17 14:36:20.796469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:31.685 [2024-11-17 14:36:20.804664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166e0ea0
00:26:31.685 [2024-11-17 14:36:20.805672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:31.685 [2024-11-17 14:36:20.805690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:31.685 [2024-11-17 14:36:20.813869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164b650) with pdu=0x2000166fc560
00:26:31.685 [2024-11-17 14:36:20.814886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:31.685 [2024-11-17 14:36:20.814905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:31.685 27148.00 IOPS, 106.05 MiB/s
00:26:31.685 Latency(us)
00:26:31.685 [2024-11-17T13:36:20.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.685 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:31.685 nvme0n1 : 2.01 27167.51 106.12 0.00 0.00 4705.37 1823.61 12822.26
00:26:31.685 [2024-11-17T13:36:20.910Z] ===================================================================================================================
00:26:31.685 [2024-11-17T13:36:20.910Z] Total : 27167.51 106.12 0.00 0.00 4705.37 1823.61 12822.26
00:26:31.685 {
00:26:31.685   "results": [
00:26:31.685     {
00:26:31.685       "job": "nvme0n1",
00:26:31.685       "core_mask": "0x2",
00:26:31.685       "workload": "randwrite",
00:26:31.685       "status": "finished",
00:26:31.685       "queue_depth": 128,
00:26:31.685       "io_size": 4096,
00:26:31.685       "runtime": 2.005042,
00:26:31.685       "iops": 27167.510705511406,
00:26:31.685       "mibps": 106.12308869340393,
00:26:31.685       "io_failed": 0,
00:26:31.685       "io_timeout": 0,
00:26:31.685       "avg_latency_us": 4705.370200821163,
00:26:31.685       "min_latency_us": 1823.6104347826088,
00:26:31.685       "max_latency_us": 12822.260869565218
00:26:31.685     }
00:26:31.685   ],
00:26:31.685   "core_count": 1
00:26:31.685 }
00:26:31.685 14:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:31.685 14:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:31.685 14:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:31.685 | .driver_specific
00:26:31.685 | .nvme_error
00:26:31.685 | .status_code
00:26:31.685 | .command_transient_transport_error'
00:26:31.685 14:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 ))
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1611405
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1611405 ']'
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1611405
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1611405
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1611405'
00:26:31.945 killing process with pid 1611405
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1611405
00:26:31.945 Received shutdown signal, test time was about 2.000000 seconds
00:26:31.945
00:26:31.945 Latency(us)
00:26:31.945 [2024-11-17T13:36:21.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.945 [2024-11-17T13:36:21.170Z] ===================================================================================================================
00:26:31.945 [2024-11-17T13:36:21.170Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:31.945 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1611405
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1611876
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1611876 /var/tmp/bperf.sock
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1611876 ']'
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:32.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:32.204 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:32.204 [2024-11-17 14:36:21.306256] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:26:32.204 [2024-11-17 14:36:21.306302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611876 ]
00:26:32.204 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:32.204 Zero copy mechanism will not be used.
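run_bperf_err has now launched a fresh bdevperf in wait mode for the 128 KiB, queue-depth-16 pass. The command line is copied from the trace above; the flag glosses are interpretation, not log output:

# -m 2: core mask 0x2, a single reactor on core 1 (hence the reactor_1
#       process name killprocess checked earlier)
# -r:   UNIX-domain RPC socket that bperf_rpc/bperf_py talk to
# -w/-o/-t/-q: randwrite workload, 131072-byte I/Os, 2 s runtime, queue depth 16
# -z:   start idle and wait for a 'perform_tests' RPC, which gives the harness
#       a window to configure error injection before any I/O is issued
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z

The zero-copy notice is expected with these parameters: the 131072-byte I/O size exceeds the 65536-byte threshold, so the zero copy mechanism is simply skipped.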
00:26:32.204 [2024-11-17 14:36:21.381861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:32.204 [2024-11-17 14:36:21.419292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:32.464 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:32.464 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:32.464 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:32.464 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:32.722 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:32.723 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:32.723 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:32.723 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:32.723 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:32.723 14:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:32.982 nvme0n1
00:26:32.982 14:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:32.982 14:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:32.982 14:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:32.982 14:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:32.982 14:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:32.982 14:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:33.242 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:33.242 Zero copy mechanism will not be used.
00:26:33.242 Running I/O for 2 seconds...
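Those RPCs are the whole digest-error setup. Assuming, as the helper names suggest, that bperf_rpc addresses bdevperf's socket while rpc_cmd addresses the nvmf target's default RPC socket, the sequence condenses to the sketch below: the host keeps per-status-code NVMe error statistics and retries failed I/O indefinitely, the controller is attached with TCP data digest (--ddgst) enabled, and the target's crc32c engine is corrupted on an interval (-i 32), so the two-second randwrite run that follows keeps failing its data-digest check:

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"  # bdevperf (host side)
TGT_RPC="$SPDK_DIR/scripts/rpc.py"                           # nvmf target, default socket (assumption)

# Host: count NVMe errors per status code and retry failed I/O forever, so
# digest failures show up as counters instead of ending the run.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target: make sure crc32c error injection is off while connecting.
$TGT_RPC accel_error_inject_error -o crc32c -t disable

# Host: attach the subsystem with TCP data digest enabled; this creates nvme0n1.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target: corrupt crc32c on an interval of 32 operations, then start the
# queued job on the held bdevperf.
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted WRITE in the run below then logs a tcp.c data_crc32_calc_done digest error followed by a retried COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is exactly what get_transient_errcount tallies afterwards.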
00:26:33.242 [2024-11-17 14:36:22.261367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.261451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.261481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.267027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.267094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.267119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.271572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.271641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.271664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.276110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.276166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.276186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.280827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.280882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.280901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.285584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.285645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.285668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.290801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.290875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.290895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.296193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.296254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.296273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.301602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.301676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.301696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.306310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.306380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.306399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.310866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.310921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.310940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.315480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.315544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.315563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.319909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.319969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.319988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.324401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.324458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.324477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.329026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.329083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.329103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.334196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.334251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.334269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.339624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.339680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.242 [2024-11-17 14:36:22.339699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.242 [2024-11-17 14:36:22.344470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.242 [2024-11-17 14:36:22.344570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.344588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.349068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.349146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.349165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.353711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.353811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.353829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.359577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.359692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.359711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.364081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.364149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.364167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.368505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.368565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.368584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.372790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.372881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.372900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.377134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.377201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.377220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.381492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.381554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.381573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.385816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.385874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.385892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.390111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.390163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.390182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.394407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.394464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.394483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.398879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.398931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.398950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.403618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.403685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.403704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.408799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.408854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.408876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.414470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.414524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.414543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.419390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.419444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.419463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.424103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.424183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 
14:36:22.424202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.428639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.428694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.428713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.433447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.433518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.433536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.437825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.437917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.437935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.442143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.442214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.442233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.446392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.446460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.446479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.450659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.450726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.450745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.454940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.455002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:33.243 [2024-11-17 14:36:22.455021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.243 [2024-11-17 14:36:22.459288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.243 [2024-11-17 14:36:22.459346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.243 [2024-11-17 14:36:22.459372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.463611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.463670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.463689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.467918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.467978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.467997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.472224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.472274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.472293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.476484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.476546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.476566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.480734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.480797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.480816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.484986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.485049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.485068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.489227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.489279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.489297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.493450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.493507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.493525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.497741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.497795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.497815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.501984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.502043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.502062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.506211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.506269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.506288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.510461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.510519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.510538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.514672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.514735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.514754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.518978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.519031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.519050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.523340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.523415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.523441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.527738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.527790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.527809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.532084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.532143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.532162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.536392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.536449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.536467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.540625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.540677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.540695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.544852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.544909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.544927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.549085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.549140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.504 [2024-11-17 14:36:22.549158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.504 [2024-11-17 14:36:22.553763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.504 [2024-11-17 14:36:22.553832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.553852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.558794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.558918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.558937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.563921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.563998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.564017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.569096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.569148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.569167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.575184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.575267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.575285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.580053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.580140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.580159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.584860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.584915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.584933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.589509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.589580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.589598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.593915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.593979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.593998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.598418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.598477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.598495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.603073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.603171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.603189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.607749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.607831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.607850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.612422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 
14:36:22.612508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.612526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.616947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.617010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.617029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.621464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.621533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.621552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.626287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.626376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.626395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.631345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.631408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.631427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.636524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.636576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.636594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.641847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.641973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.641992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.647257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with 
pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.647311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.647334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.652968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.653113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.653133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.658997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.659143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.659162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.665847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.665990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.666009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.673253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.673315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.673333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.679922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.680000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.680019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.686419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.686475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.686494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.691122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.691198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.505 [2024-11-17 14:36:22.691217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.505 [2024-11-17 14:36:22.695655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.505 [2024-11-17 14:36:22.695728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.506 [2024-11-17 14:36:22.695747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.506 [2024-11-17 14:36:22.700066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.506 [2024-11-17 14:36:22.700126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.506 [2024-11-17 14:36:22.700145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.506 [2024-11-17 14:36:22.704445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.506 [2024-11-17 14:36:22.704522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.506 [2024-11-17 14:36:22.704541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.506 [2024-11-17 14:36:22.708807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.506 [2024-11-17 14:36:22.708862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.506 [2024-11-17 14:36:22.708881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.506 [2024-11-17 14:36:22.713160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.506 [2024-11-17 14:36:22.713214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.506 [2024-11-17 14:36:22.713233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.506 [2024-11-17 14:36:22.717482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.506 [2024-11-17 14:36:22.717548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.506 [2024-11-17 14:36:22.717567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.506 [2024-11-17 14:36:22.721935] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.506 [2024-11-17 14:36:22.721990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.506 [2024-11-17 14:36:22.722009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.766 [2024-11-17 14:36:22.726271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.766 [2024-11-17 14:36:22.726332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.766 [2024-11-17 14:36:22.726357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.766 [2024-11-17 14:36:22.730657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.766 [2024-11-17 14:36:22.730766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.766 [2024-11-17 14:36:22.730785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.766 [2024-11-17 14:36:22.735369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.766 [2024-11-17 14:36:22.735424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.766 [2024-11-17 14:36:22.735442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.766 [2024-11-17 14:36:22.739777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.766 [2024-11-17 14:36:22.739840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.766 [2024-11-17 14:36:22.739859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.766 [2024-11-17 14:36:22.744078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.744143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.744162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.748446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.748503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.748522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.752788] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.752842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.752861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.757061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.757118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.757136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.761425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.761488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.761506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.765703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.765759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.765778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.770370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.770427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.770446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.774906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.774970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.774992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.779445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.779513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.779532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.767 
[2024-11-17 14:36:22.784138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.784211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.784230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.788531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.788588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.788606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.793086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.793149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.793168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.797495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.797562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.797580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.802013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.802075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.802094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.806417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.806474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.806493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.810803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.810872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.810891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.815153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.815213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.815231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.819471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.819530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.819548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.824151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.824237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.824256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.828790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.828856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.828875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.833201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.833261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.833281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.837901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.837964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.837983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.843082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.843134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.843153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.848229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.848285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.848304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.853291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.853380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.853399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.858031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.767 [2024-11-17 14:36:22.858089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.767 [2024-11-17 14:36:22.858109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.767 [2024-11-17 14:36:22.862814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.862875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.862894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.867858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.867986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.868004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.873256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.873312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.873331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.879001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.879055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.879074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.884187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.884236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.884255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.889233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.889301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.889319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.894348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.894411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.894429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.899756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.899819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.899842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.905335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.905400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.905419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.910335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.910488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.910507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.915333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.915429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.915448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.921014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.921124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.921143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.926125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.926176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.926195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.931773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.931844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.931863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.937044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.937105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.937124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.942007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.942083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.942104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.947179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.947249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.947268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.952184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.952250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 
14:36:22.952269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.957331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.957405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.957423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.962681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.962737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.962755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.968112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.968165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.968184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.973224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.973305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.973324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.978497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.978562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.978581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.768 [2024-11-17 14:36:22.984244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:33.768 [2024-11-17 14:36:22.984305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.768 [2024-11-17 14:36:22.984324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:22.989378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:22.989433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:34.030 [2024-11-17 14:36:22.989451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:22.994939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:22.995031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:22.995050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.000193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.000258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.000276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.005122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.005200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.005219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.009962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.010037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.010055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.015110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.015204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.015223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.020453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.020508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.020528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.025262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.025372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.025391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.030182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.030256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.030275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.034729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.034799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.034823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.039186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.039300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.039318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.043817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.043883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.043902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.048488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.048598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.048617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.053222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.053279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.053297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.057733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.057852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.057871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.062685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.062752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.062770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.067364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.067422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.067441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.072802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.072858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.072876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.078645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.078729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.078747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.083603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.083664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.083683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.088341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.088406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.088424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.092921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.092975] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.092993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.097509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.097628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.097647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.101965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.102024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.102043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.030 [2024-11-17 14:36:23.106403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.030 [2024-11-17 14:36:23.106501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.030 [2024-11-17 14:36:23.106520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.111163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.111218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.111237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.115804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.115858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.115876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.120165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.120262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.120280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.124877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.124996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.125014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.129285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.129390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.129409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.134110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.134172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.134191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.138783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.138836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.138854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.143493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.143550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.143569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.147878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.147937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.147956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.152557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.152616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.152634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.157619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 
14:36:23.157675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.157697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.162880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.162949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.162968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.167752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.167812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.167830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.172395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.172462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.172480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.176904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.176997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.177016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.181644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.181702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.181720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.186196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.186293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.186311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.190773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with 
pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.190825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.190843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.195367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.195423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.195442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.199966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.200082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.200101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.204528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.204629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.204647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.209212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.209320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.209338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.213983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.214083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.214101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.218320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.218398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.218417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.223141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.223231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.223250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.227944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.228009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.228028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.031 [2024-11-17 14:36:23.232389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.031 [2024-11-17 14:36:23.232448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.031 [2024-11-17 14:36:23.232467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.032 [2024-11-17 14:36:23.236933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.032 [2024-11-17 14:36:23.236994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.032 [2024-11-17 14:36:23.237012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.032 [2024-11-17 14:36:23.241446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.032 [2024-11-17 14:36:23.241516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.032 [2024-11-17 14:36:23.241534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.032 [2024-11-17 14:36:23.245828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.032 [2024-11-17 14:36:23.245900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.032 [2024-11-17 14:36:23.245920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.291 [2024-11-17 14:36:23.250477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.291 [2024-11-17 14:36:23.250549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.291 [2024-11-17 14:36:23.250569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.291 [2024-11-17 14:36:23.255160] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.291 [2024-11-17 14:36:23.255213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.291 [2024-11-17 14:36:23.255232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.291 [2024-11-17 14:36:23.259820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.291 [2024-11-17 14:36:23.261093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.291 [2024-11-17 14:36:23.261114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.291 6479.00 IOPS, 809.88 MiB/s [2024-11-17T13:36:23.516Z] [2024-11-17 14:36:23.265275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.291 [2024-11-17 14:36:23.265516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.291 [2024-11-17 14:36:23.265536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.291 [2024-11-17 14:36:23.269015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.291 [2024-11-17 14:36:23.269237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.291 [2024-11-17 14:36:23.269257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.291 [2024-11-17 14:36:23.272729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.291 [2024-11-17 14:36:23.272955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.291 [2024-11-17 14:36:23.272975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.276477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.276709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.276730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.280255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.280463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.280483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:34.292 [2024-11-17 14:36:23.284032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.284252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.284273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.287986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.288199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.288218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.292440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.292647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.292667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.297090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.297319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.297339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.301152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.301381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.301400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.305291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.305502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.305521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.309471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.309692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.309711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.313316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.313527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.313547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.317292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.317503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.317531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.321751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.321954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.321974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.326491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.326700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.326718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.330932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.331124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.331142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.335591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.335775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.335793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.340158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.340328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.340346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.344642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.344836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.344854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.349370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.349545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.349567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.353814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.353984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.354004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.358159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.358335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.358383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.363081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.363265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.363283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.367496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.367641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.367672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.372222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.372412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.372430] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.377054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.377234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.292 [2024-11-17 14:36:23.377252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.292 [2024-11-17 14:36:23.381058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.292 [2024-11-17 14:36:23.381231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.381249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.384969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.385149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.385167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.388862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.389048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.389072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.392710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.392900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.392919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.396528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.396722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.396742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.400369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.400546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.400566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.404171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.404357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.404376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.408253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.408479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.408499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.412970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.413146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.413165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.416851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.417042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.417061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.420650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.420829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.420849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.424445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.424634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.424652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.428228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.428422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 
14:36:23.428440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.432037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.432226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.432244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.435935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.436108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.436125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.439939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.440132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.440151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.444156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.444333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.444356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.448531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.448703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.448723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.453251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.453440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.453459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.457717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.457839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:34.293 [2024-11-17 14:36:23.457862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.462510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.462670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.462688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.466657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.466847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.466867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.470685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.470864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.470884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.474711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.474905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.474925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.478717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.478908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.478927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.482887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.483095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.483115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.293 [2024-11-17 14:36:23.486886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.293 [2024-11-17 14:36:23.487081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.293 [2024-11-17 14:36:23.487101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.294 [2024-11-17 14:36:23.491002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.294 [2024-11-17 14:36:23.491193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.294 [2024-11-17 14:36:23.491211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.294 [2024-11-17 14:36:23.495024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.294 [2024-11-17 14:36:23.495215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.294 [2024-11-17 14:36:23.495235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.294 [2024-11-17 14:36:23.499069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.294 [2024-11-17 14:36:23.499248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.294 [2024-11-17 14:36:23.499266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.294 [2024-11-17 14:36:23.503046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.294 [2024-11-17 14:36:23.503229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.294 [2024-11-17 14:36:23.503248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.294 [2024-11-17 14:36:23.507114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.294 [2024-11-17 14:36:23.507295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.294 [2024-11-17 14:36:23.507314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.511069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.511262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.511281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.515125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.515284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.515303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.519422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.519576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.519594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.524270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.524419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.524438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.528416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.528613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.528632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.532486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.532675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.532694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.536587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.536784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.536802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.540570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.540760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.540780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.544580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.544759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.544776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.548538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.548714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.548733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.552562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.552737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.552756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.556626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.556809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.554 [2024-11-17 14:36:23.556828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.554 [2024-11-17 14:36:23.560763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.554 [2024-11-17 14:36:23.560933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.560951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.564743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.564924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.564947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.568729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.568911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.568929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.572720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 
14:36:23.572908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.572926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.576772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.576959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.576978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.580767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.580953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.580971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.584758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.584929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.584948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.588769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.588947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.588965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.592754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.592928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.592946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.596775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.596962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.596980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.600756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with 
pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.600945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.600967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.605012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.605168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.605187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.609238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.609410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.609428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.613227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.613420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.613439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.617227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.617409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.617427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.621177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.621366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.621385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.625190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.625376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.625394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.629176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.629365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.629384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.633151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.633346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.633371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.637176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.637377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.637396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.641490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.641683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.641705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.646741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.646965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.646983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.652694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.652890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.652909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.657665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.657893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.657913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.663615] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.663829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.663849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.668999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.669165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.669184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.674818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.555 [2024-11-17 14:36:23.675016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.555 [2024-11-17 14:36:23.675043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.555 [2024-11-17 14:36:23.681704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.681889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.681911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.686973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.687184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.687204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.691949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.692130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.692149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.696144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.696348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.696372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.700969] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.701268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.701288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.706280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.706607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.706627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.710546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.710756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.710774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.714968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.715157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.715178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.719141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.719344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.719369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.723231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.723440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.723462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.727237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.727457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.727475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.556 
[2024-11-17 14:36:23.731340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.731525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.731543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.735491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.735715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.735734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.739598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.739765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.739783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.743843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.744038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.744057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.747891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.748113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.748133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.751923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.752090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.752108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.756066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.756238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.756256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.760243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.760500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.760521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.764192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.764391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.764410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.768235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.768443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.768462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.556 [2024-11-17 14:36:23.773147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.556 [2024-11-17 14:36:23.773364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.556 [2024-11-17 14:36:23.773384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.817 [2024-11-17 14:36:23.778302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.817 [2024-11-17 14:36:23.778494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.817 [2024-11-17 14:36:23.778513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.817 [2024-11-17 14:36:23.783562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.817 [2024-11-17 14:36:23.783843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.817 [2024-11-17 14:36:23.783864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.817 [2024-11-17 14:36:23.789674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8 00:26:34.817 [2024-11-17 14:36:23.789861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.817 [2024-11-17 14:36:23.789881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.817 [2024-11-17 14:36:23.794415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8
00:26:34.817 [2024-11-17 14:36:23.794571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.817 [2024-11-17 14:36:23.794590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-line pattern (a tcp.c:2233:data_crc32_calc_done *ERROR*, the WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every in-flight WRITE from 14:36:23.799 through 14:36:24.258, all on tqpair=(0x164bb30) with pdu=0x2000166ff3c8; only the timestamps, cid (1 or 2), lba, and sqhd values rotate ...]
00:26:35.082 [2024-11-17 14:36:24.261589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x164bb30) with pdu=0x2000166ff3c8
00:26:35.082 [2024-11-17 14:36:24.261764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.082 [2024-11-17 14:36:24.261783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:35.082 6946.00 IOPS, 868.25 MiB/s
00:26:35.082 Latency(us)
00:26:35.082 [2024-11-17T13:36:24.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:35.082 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:35.082 nvme0n1 : 2.00 6944.70 868.09 0.00 0.00 2300.05 1745.25 7522.39
00:26:35.082 [2024-11-17T13:36:24.307Z] ===================================================================================================================
00:26:35.082 [2024-11-17T13:36:24.307Z] Total : 6944.70 868.09 0.00 0.00 2300.05 1745.25 7522.39
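The MiB/s column follows directly from the IOPS figure and the 131072-byte I/O size shown in the job line; a one-line shell sanity check (assuming bc is installed, numbers taken from the table above):

    # 6944.70 IOPS x 131072 bytes per I/O, converted to MiB/s (1 MiB = 1048576 bytes)
    echo '6944.70 * 131072 / 1048576' | bc -l    # ~868.09, matching the MiB/s column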
00:26:35.082 {
00:26:35.082 "results": [
00:26:35.082 {
00:26:35.082 "job": "nvme0n1",
00:26:35.082 "core_mask": "0x2",
00:26:35.082 "workload": "randwrite",
00:26:35.082 "status": "finished",
00:26:35.082 "queue_depth": 16,
00:26:35.082 "io_size": 131072,
00:26:35.082 "runtime": 2.002677,
00:26:35.082 "iops": 6944.704513009337,
00:26:35.082 "mibps": 868.0880641261671,
00:26:35.082 "io_failed": 0,
00:26:35.082 "io_timeout": 0,
00:26:35.082 "avg_latency_us": 2300.048103437496,
00:26:35.082 "min_latency_us": 1745.2521739130434,
00:26:35.082 "max_latency_us": 7522.393043478261
00:26:35.082 }
00:26:35.082 ],
00:26:35.082 "core_count": 1
00:26:35.082 }
00:26:35.082 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:35.082 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:35.082 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:35.082 | .driver_specific
00:26:35.082 | .nvme_error
00:26:35.082 | .status_code
00:26:35.082 | .command_transient_transport_error'
00:26:35.082 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 449 > 0 ))
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1611876
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1611876 ']'
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1611876
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1611876
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1611876'
00:26:35.342 killing process with pid 1611876
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1611876
00:26:35.342 Received shutdown signal, test time was about 2.000000 seconds
00:26:35.342
00:26:35.342 Latency(us)
00:26:35.342 [2024-11-17T13:36:24.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:35.342 [2024-11-17T13:36:24.567Z] ===================================================================================================================
00:26:35.342 [2024-11-17T13:36:24.567Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:35.342 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1611876
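Stripped of the xtrace noise, the get_transient_errcount check at host/digest.sh@71 above boils down to the following; a minimal sketch built only from the commands visible in the trace (the rpc.py path and /var/tmp/bperf.sock socket are as logged, the variable names are ours):

    # Ask bdevperf for nvme0n1's iostat over its RPC socket and pull out the
    # transient transport error counter that the digest errors above should bump.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))    # this run counted 449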
00:26:35.602 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1610217
00:26:35.602 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1610217 ']'
00:26:35.602 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1610217
00:26:35.602 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:35.602 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:35.602 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1610217
00:26:35.602 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:35.602 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:35.602 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1610217'
00:26:35.602 killing process with pid 1610217
00:26:35.602 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1610217
00:26:35.602 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1610217
00:26:35.861
00:26:35.861 real 0m13.999s
00:26:35.861 user 0m26.774s
00:26:35.861 sys 0m4.578s
00:26:35.861 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:35.861 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:35.861 ************************************
00:26:35.861 END TEST nvmf_digest_error
00:26:35.861 ************************************
00:26:35.861 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:26:35.861 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:26:35.861 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:35.861 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:26:35.861 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:35.861 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:26:35.861 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:35.861 14:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:35.861 rmmod nvme_tcp
00:26:35.861 rmmod nvme_fabrics
00:26:35.861 rmmod nvme_keyring
00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1610217 ']'
00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1610217
00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1610217 ']'
00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1610217
00:26:35.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1610217) - No such process
00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1610217 is not found'
00:26:35.861 Process with pid 1610217 is not found
00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
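The @954-@981 lines above are autotest_common.sh's killprocess helper stepping through its checks; reconstructed from the xtrace alone, so a sketch of the visible flow rather than the verbatim SPDK source:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                 # @954: a pid argument is required
        if ! kill -0 "$pid"; then                 # @958: is the process still alive?
            echo "Process with pid $pid is not found"    # @981: already-gone path above
            return 0
        fi
        if [ "$(uname)" = Linux ]; then           # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        [ "$process_name" = sudo ] && return 1    # @964: never signal sudo itself
        echo "killing process with pid $pid"      # @972
        kill "$pid"                               # @973
        wait "$pid"                               # @978: reap it and collect its status
    }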
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.861 14:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:38.397 00:26:38.397 real 0m37.147s 00:26:38.397 user 0m56.343s 00:26:38.397 sys 0m13.870s 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.397 ************************************ 00:26:38.397 END TEST nvmf_digest 00:26:38.397 ************************************ 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.397 ************************************ 00:26:38.397 START TEST nvmf_bdevperf 00:26:38.397 ************************************ 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:38.397 * Looking for test storage... 
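For reference, the nvmftestfini teardown traced just above (the modprobe -r calls, the iptr iptables filter, remove_spdk_ns, and the address flush) reduces to a handful of commands. A minimal standalone sketch, assuming this rig's names (cvl_0_1 initiator interface, cvl_0_0_ns_spdk target namespace) rather than anything the harness guarantees:

#!/usr/bin/env bash
# Sketch of the teardown sequence shown in the trace above (needs root).
set -x
# Unload the kernel NVMe/TCP initiator stack (matches the rmmod lines above).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# iptr: dump the ruleset, drop only the SPDK_NVMF-tagged rules, reload the rest.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# remove_spdk_ns equivalent; the namespace name is assumed from this log.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
# Leave the initiator-side interface with no stale IPv4 address.
ip -4 addr flush cvl_0_1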
00:26:38.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.397 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:38.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.397 --rc genhtml_branch_coverage=1 00:26:38.398 --rc genhtml_function_coverage=1 00:26:38.398 --rc genhtml_legend=1 00:26:38.398 --rc geninfo_all_blocks=1 00:26:38.398 --rc geninfo_unexecuted_blocks=1 00:26:38.398 00:26:38.398 ' 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:38.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.398 --rc genhtml_branch_coverage=1 00:26:38.398 --rc genhtml_function_coverage=1 00:26:38.398 --rc genhtml_legend=1 00:26:38.398 --rc geninfo_all_blocks=1 00:26:38.398 --rc geninfo_unexecuted_blocks=1 00:26:38.398 00:26:38.398 ' 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:38.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.398 --rc genhtml_branch_coverage=1 00:26:38.398 --rc genhtml_function_coverage=1 00:26:38.398 --rc genhtml_legend=1 00:26:38.398 --rc geninfo_all_blocks=1 00:26:38.398 --rc geninfo_unexecuted_blocks=1 00:26:38.398 00:26:38.398 ' 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:38.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.398 --rc genhtml_branch_coverage=1 00:26:38.398 --rc genhtml_function_coverage=1 00:26:38.398 --rc genhtml_legend=1 00:26:38.398 --rc geninfo_all_blocks=1 00:26:38.398 --rc geninfo_unexecuted_blocks=1 00:26:38.398 00:26:38.398 ' 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:38.398 14:36:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.970 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:44.971 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:44.971 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
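The discovery loop traced around here is plain sysfs walking; a minimal sketch of the same idea, assuming only that the E810 functions show up as vendor 0x8086 / device 0x159b (the IDs echoed in the "Found 0000:86:00.x" lines):

#!/usr/bin/env bash
# Enumerate Intel E810 (0x8086:0x159b) PCI functions and list the net
# devices bound under them, mirroring the "Found net devices under ..."
# output in the trace.
shopt -s nullglob
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done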
00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:44.971 Found net devices under 0000:86:00.0: cvl_0_0 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:44.971 Found net devices under 0000:86:00.1: cvl_0_1 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:26:44.971 00:26:44.971 --- 10.0.0.2 ping statistics --- 00:26:44.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.971 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:26:44.971 00:26:44.971 --- 10.0.0.1 ping statistics --- 00:26:44.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.971 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:44.971 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1616021 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1616021 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1616021 ']' 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 [2024-11-17 14:36:33.376176] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:26:44.972 [2024-11-17 14:36:33.376220] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.972 [2024-11-17 14:36:33.438228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:44.972 [2024-11-17 14:36:33.478284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.972 [2024-11-17 14:36:33.478324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.972 [2024-11-17 14:36:33.478332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.972 [2024-11-17 14:36:33.478338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.972 [2024-11-17 14:36:33.478343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.972 [2024-11-17 14:36:33.479674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.972 [2024-11-17 14:36:33.479803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.972 [2024-11-17 14:36:33.479804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 [2024-11-17 14:36:33.627042] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 Malloc0 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
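Taken together with the nvmf_subsystem_add_ns / nvmf_subsystem_add_listener calls that follow just below, the rpc_cmd trace amounts to a five-step target bring-up. A sketch using SPDK's scripts/rpc.py, with the flags copied from the trace (run it inside the target netns, e.g. under `ip netns exec cvl_0_0_ns_spdk`, so the listener can bind 10.0.0.2):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options exactly as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as the first namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420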
00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 [2024-11-17 14:36:33.692307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.972 { 00:26:44.972 "params": { 00:26:44.972 "name": "Nvme$subsystem", 00:26:44.972 "trtype": "$TEST_TRANSPORT", 00:26:44.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.972 "adrfam": "ipv4", 00:26:44.972 "trsvcid": "$NVMF_PORT", 00:26:44.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.972 "hdgst": ${hdgst:-false}, 00:26:44.972 "ddgst": ${ddgst:-false} 00:26:44.972 }, 00:26:44.972 "method": "bdev_nvme_attach_controller" 00:26:44.972 } 00:26:44.972 EOF 00:26:44.972 )") 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:44.972 14:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:44.972 "params": { 00:26:44.972 "name": "Nvme1", 00:26:44.972 "trtype": "tcp", 00:26:44.972 "traddr": "10.0.0.2", 00:26:44.972 "adrfam": "ipv4", 00:26:44.972 "trsvcid": "4420", 00:26:44.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:44.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:44.972 "hdgst": false, 00:26:44.972 "ddgst": false 00:26:44.972 }, 00:26:44.972 "method": "bdev_nvme_attach_controller" 00:26:44.972 }' 00:26:44.972 [2024-11-17 14:36:33.744089] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:26:44.972 [2024-11-17 14:36:33.744129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616126 ] 00:26:44.972 [2024-11-17 14:36:33.820456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.972 [2024-11-17 14:36:33.861957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.972 Running I/O for 1 seconds... 00:26:46.028 11031.00 IOPS, 43.09 MiB/s 00:26:46.028 Latency(us) 00:26:46.028 [2024-11-17T13:36:35.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.028 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:46.028 Verification LBA range: start 0x0 length 0x4000 00:26:46.028 Nvme1n1 : 1.01 11078.44 43.28 0.00 0.00 11508.21 1218.11 16184.54 00:26:46.028 [2024-11-17T13:36:35.253Z] =================================================================================================================== 00:26:46.028 [2024-11-17T13:36:35.253Z] Total : 11078.44 43.28 0.00 0.00 11508.21 1218.11 16184.54 00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1616360 00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:46.028 { 00:26:46.028 "params": { 00:26:46.028 "name": "Nvme$subsystem", 00:26:46.028 "trtype": "$TEST_TRANSPORT", 00:26:46.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.028 "adrfam": "ipv4", 00:26:46.028 "trsvcid": "$NVMF_PORT", 00:26:46.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.028 "hdgst": ${hdgst:-false}, 00:26:46.028 "ddgst": ${ddgst:-false} 00:26:46.028 }, 00:26:46.028 "method": "bdev_nvme_attach_controller" 00:26:46.028 } 00:26:46.028 EOF 00:26:46.028 )") 00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
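The bdev_nvme_attach_controller object printed by gen_nvmf_target_json (here and for the first run above) is what bdevperf consumes over /dev/fd/62 and /dev/fd/63. A sketch of an equivalent standalone invocation, assuming the standard SPDK JSON-config envelope around the printed params (the generated file itself is not shown verbatim in the trace, and /tmp/bdevperf_nvme.json is just an illustrative path):

#!/usr/bin/env bash
# Write a bdev-subsystem config that attaches the remote controller, then
# run the same 128-deep, 4 KiB verify workload as the 15-second run below
# (paths are relative to an SPDK checkout).
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15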
00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:46.028 14:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:46.028 "params": { 00:26:46.028 "name": "Nvme1", 00:26:46.028 "trtype": "tcp", 00:26:46.028 "traddr": "10.0.0.2", 00:26:46.028 "adrfam": "ipv4", 00:26:46.028 "trsvcid": "4420", 00:26:46.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:46.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:46.028 "hdgst": false, 00:26:46.028 "ddgst": false 00:26:46.028 }, 00:26:46.028 "method": "bdev_nvme_attach_controller" 00:26:46.028 }' 00:26:46.028 [2024-11-17 14:36:35.232605] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:26:46.028 [2024-11-17 14:36:35.232653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616360 ] 00:26:46.288 [2024-11-17 14:36:35.307619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.288 [2024-11-17 14:36:35.346635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.288 Running I/O for 15 seconds... 00:26:48.602 10992.00 IOPS, 42.94 MiB/s [2024-11-17T13:36:38.397Z] 11131.00 IOPS, 43.48 MiB/s [2024-11-17T13:36:38.397Z] 14:36:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1616021 00:26:49.172 14:36:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:49.172 [2024-11-17 14:36:38.203901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.172 [2024-11-17 14:36:38.203945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.172 [2024-11-17 14:36:38.203963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.172 [2024-11-17 14:36:38.203972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.172 [2024-11-17 14:36:38.203982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.172 [2024-11-17 14:36:38.203989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.172 [2024-11-17 14:36:38.203999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.172 [2024-11-17 14:36:38.204006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.172 [2024-11-17 14:36:38.204015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.172 [2024-11-17 14:36:38.204022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.172 [2024-11-17 14:36:38.204030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.172 [2024-11-17 
14:36:38.204039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair print_command/print_completion pairs elided: READ sqid:1 commands for lba:106864 through lba:107328 (len:8 each) plus WRITE lba:107784, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:26:49.174 [2024-11-17
14:36:38.205138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205442] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-11-17 14:36:38.205463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.174 [2024-11-17 14:36:38.205473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.175 [2024-11-17 14:36:38.205613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.175 [2024-11-17 14:36:38.205627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.175 [2024-11-17 14:36:38.205642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.175 [2024-11-17 14:36:38.205656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.175 [2024-11-17 14:36:38.205676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.175 [2024-11-17 14:36:38.205690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.175 [2024-11-17 14:36:38.205705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 
[2024-11-17 14:36:38.205899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.205986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.205996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.206002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.206010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.206017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.206025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.206031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.206040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.175 [2024-11-17 14:36:38.206047] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.175 [2024-11-17 14:36:38.206055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a3d00 is same with the state(6) to be set 00:26:49.175 [2024-11-17 14:36:38.206064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:49.176 [2024-11-17 14:36:38.206069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:49.176 [2024-11-17 14:36:38.206075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107768 len:8 PRP1 0x0 PRP2 0x0 00:26:49.176 [2024-11-17 14:36:38.206083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.176 [2024-11-17 14:36:38.208991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.176 [2024-11-17 14:36:38.209046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.176 [2024-11-17 14:36:38.209585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.176 [2024-11-17 14:36:38.209603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.176 [2024-11-17 14:36:38.209612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.176 [2024-11-17 14:36:38.209792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.176 [2024-11-17 14:36:38.209971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.176 [2024-11-17 14:36:38.209980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.176 [2024-11-17 14:36:38.209989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.176 [2024-11-17 14:36:38.209996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.176 [2024-11-17 14:36:38.222249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.176 [2024-11-17 14:36:38.222613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.176 [2024-11-17 14:36:38.222633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.176 [2024-11-17 14:36:38.222642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.176 [2024-11-17 14:36:38.222806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.176 [2024-11-17 14:36:38.222971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.176 [2024-11-17 14:36:38.222980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.176 [2024-11-17 14:36:38.222987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.176 [2024-11-17 14:36:38.222994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:49.176 [2024-11-17 14:36:38.235104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.176 [2024-11-17 14:36:38.235477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.176 [2024-11-17 14:36:38.235525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.176 [2024-11-17 14:36:38.235556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.176 [2024-11-17 14:36:38.236055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.176 [2024-11-17 14:36:38.236220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.176 [2024-11-17 14:36:38.236229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.176 [2024-11-17 14:36:38.236236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.176 [2024-11-17 14:36:38.236242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.176 [2024-11-17 14:36:38.247960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.176 [2024-11-17 14:36:38.248337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.176 [2024-11-17 14:36:38.248360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.176 [2024-11-17 14:36:38.248369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.176 [2024-11-17 14:36:38.248534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.176 [2024-11-17 14:36:38.248697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.176 [2024-11-17 14:36:38.248707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.176 [2024-11-17 14:36:38.248713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.176 [2024-11-17 14:36:38.248720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:49.176 [2024-11-17 14:36:38.260789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.176 [2024-11-17 14:36:38.261211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.176 [2024-11-17 14:36:38.261262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.176 [2024-11-17 14:36:38.261286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.176 [2024-11-17 14:36:38.261808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.176 [2024-11-17 14:36:38.261973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.176 [2024-11-17 14:36:38.261981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.176 [2024-11-17 14:36:38.261988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.176 [2024-11-17 14:36:38.261993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.176 [2024-11-17 14:36:38.273840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.176 [2024-11-17 14:36:38.274245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.176 [2024-11-17 14:36:38.274263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.176 [2024-11-17 14:36:38.274271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.176 [2024-11-17 14:36:38.274451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.176 [2024-11-17 14:36:38.274629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.176 [2024-11-17 14:36:38.274638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.176 [2024-11-17 14:36:38.274645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.176 [2024-11-17 14:36:38.274652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:49.176 [2024-11-17 14:36:38.286767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.176 [2024-11-17 14:36:38.287183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.176 [2024-11-17 14:36:38.287200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.176 [2024-11-17 14:36:38.287208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.176 [2024-11-17 14:36:38.287378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.176 [2024-11-17 14:36:38.287542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.176 [2024-11-17 14:36:38.287551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.176 [2024-11-17 14:36:38.287558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.176 [2024-11-17 14:36:38.287564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.176 [2024-11-17 14:36:38.299588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.176 [2024-11-17 14:36:38.299984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.176 [2024-11-17 14:36:38.300001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.176 [2024-11-17 14:36:38.300008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.176 [2024-11-17 14:36:38.300172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.176 [2024-11-17 14:36:38.300336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.176 [2024-11-17 14:36:38.300344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.176 [2024-11-17 14:36:38.300357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.176 [2024-11-17 14:36:38.300364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:49.176 [2024-11-17 14:36:38.312447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.176 [2024-11-17 14:36:38.312866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.176 [2024-11-17 14:36:38.312902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.176 [2024-11-17 14:36:38.312927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.176 [2024-11-17 14:36:38.313523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.176 [2024-11-17 14:36:38.314077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.176 [2024-11-17 14:36:38.314095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.176 [2024-11-17 14:36:38.314116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.176 [2024-11-17 14:36:38.314130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.176 [2024-11-17 14:36:38.327291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.176 [2024-11-17 14:36:38.327824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.176 [2024-11-17 14:36:38.327871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.176 [2024-11-17 14:36:38.327893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.177 [2024-11-17 14:36:38.328369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.177 [2024-11-17 14:36:38.328627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.177 [2024-11-17 14:36:38.328640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.177 [2024-11-17 14:36:38.328650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.177 [2024-11-17 14:36:38.328660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:49.177 [2024-11-17 14:36:38.340202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.177 [2024-11-17 14:36:38.340627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.177 [2024-11-17 14:36:38.340645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.177 [2024-11-17 14:36:38.340653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.177 [2024-11-17 14:36:38.340820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.177 [2024-11-17 14:36:38.340989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.177 [2024-11-17 14:36:38.340998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.177 [2024-11-17 14:36:38.341004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.177 [2024-11-17 14:36:38.341012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.177 [2024-11-17 14:36:38.353109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.177 [2024-11-17 14:36:38.353531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.177 [2024-11-17 14:36:38.353548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.177 [2024-11-17 14:36:38.353556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.177 [2024-11-17 14:36:38.353719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.177 [2024-11-17 14:36:38.353884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.177 [2024-11-17 14:36:38.353893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.177 [2024-11-17 14:36:38.353900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.177 [2024-11-17 14:36:38.353906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:49.177 [2024-11-17 14:36:38.366006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.177 [2024-11-17 14:36:38.366425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.177 [2024-11-17 14:36:38.366443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.177 [2024-11-17 14:36:38.366450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.177 [2024-11-17 14:36:38.366613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.177 [2024-11-17 14:36:38.366777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.177 [2024-11-17 14:36:38.366787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.177 [2024-11-17 14:36:38.366793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.177 [2024-11-17 14:36:38.366799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.177 [2024-11-17 14:36:38.378894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.177 [2024-11-17 14:36:38.379314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.177 [2024-11-17 14:36:38.379331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.177 [2024-11-17 14:36:38.379338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.177 [2024-11-17 14:36:38.379509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.177 [2024-11-17 14:36:38.379674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.177 [2024-11-17 14:36:38.379684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.177 [2024-11-17 14:36:38.379690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.177 [2024-11-17 14:36:38.379697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:49.438 [2024-11-17 14:36:38.391977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.438 [2024-11-17 14:36:38.392413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.438 [2024-11-17 14:36:38.392459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.438 [2024-11-17 14:36:38.392483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.438 [2024-11-17 14:36:38.392966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.438 [2024-11-17 14:36:38.393148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.438 [2024-11-17 14:36:38.393156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.438 [2024-11-17 14:36:38.393163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.438 [2024-11-17 14:36:38.393169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.438 [2024-11-17 14:36:38.404942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.438 [2024-11-17 14:36:38.405380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.438 [2024-11-17 14:36:38.405427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.438 [2024-11-17 14:36:38.405459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.438 [2024-11-17 14:36:38.406039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.438 [2024-11-17 14:36:38.406508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.438 [2024-11-17 14:36:38.406518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.438 [2024-11-17 14:36:38.406524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.438 [2024-11-17 14:36:38.406532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:49.438 [2024-11-17 14:36:38.417796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.438 [2024-11-17 14:36:38.418173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.438 [2024-11-17 14:36:38.418190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.438 [2024-11-17 14:36:38.418200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.438 [2024-11-17 14:36:38.418372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.438 [2024-11-17 14:36:38.418538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.438 [2024-11-17 14:36:38.418547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.438 [2024-11-17 14:36:38.418554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.438 [2024-11-17 14:36:38.418561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.438 [2024-11-17 14:36:38.430661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.438 [2024-11-17 14:36:38.431070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.438 [2024-11-17 14:36:38.431115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.438 [2024-11-17 14:36:38.431139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.438 [2024-11-17 14:36:38.431731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.438 [2024-11-17 14:36:38.432318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.438 [2024-11-17 14:36:38.432327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.438 [2024-11-17 14:36:38.432333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.438 [2024-11-17 14:36:38.432340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:49.438 [2024-11-17 14:36:38.443525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.438 [2024-11-17 14:36:38.443864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.438 [2024-11-17 14:36:38.443908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.438 [2024-11-17 14:36:38.443931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.438 [2024-11-17 14:36:38.444438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.438 [2024-11-17 14:36:38.444607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.438 [2024-11-17 14:36:38.444614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.438 [2024-11-17 14:36:38.444621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.438 [2024-11-17 14:36:38.444627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.438 [2024-11-17 14:36:38.456361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.438 [2024-11-17 14:36:38.456739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.438 [2024-11-17 14:36:38.456756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.438 [2024-11-17 14:36:38.456764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.438 [2024-11-17 14:36:38.456928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.438 [2024-11-17 14:36:38.457092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.438 [2024-11-17 14:36:38.457102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.438 [2024-11-17 14:36:38.457108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.438 [2024-11-17 14:36:38.457115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:49.438 [2024-11-17 14:36:38.469434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.438 [2024-11-17 14:36:38.469765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.438 [2024-11-17 14:36:38.469784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:49.438 [2024-11-17 14:36:38.469792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:49.438 [2024-11-17 14:36:38.469971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:49.438 [2024-11-17 14:36:38.470151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.438 [2024-11-17 14:36:38.470162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.438 [2024-11-17 14:36:38.470170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.438 [2024-11-17 14:36:38.470178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.438 [2024-11-17 14:36:38.482586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.438 [2024-11-17 14:36:38.483017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.438 [2024-11-17 14:36:38.483035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.438 [2024-11-17 14:36:38.483045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.438 [2024-11-17 14:36:38.483223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.438 [2024-11-17 14:36:38.483418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.438 [2024-11-17 14:36:38.483429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.438 [2024-11-17 14:36:38.483439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.438 [2024-11-17 14:36:38.483446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.438 [2024-11-17 14:36:38.495520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.438 [2024-11-17 14:36:38.495954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.438 [2024-11-17 14:36:38.495999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.438 [2024-11-17 14:36:38.496022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.439 [2024-11-17 14:36:38.496526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.439 [2024-11-17 14:36:38.496692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.439 [2024-11-17 14:36:38.496701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.439 [2024-11-17 14:36:38.496708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.439 [2024-11-17 14:36:38.496714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.439 [2024-11-17 14:36:38.508344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.439 [2024-11-17 14:36:38.508763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.439 [2024-11-17 14:36:38.508806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.439 [2024-11-17 14:36:38.508830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.439 [2024-11-17 14:36:38.509372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.439 [2024-11-17 14:36:38.509538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.439 [2024-11-17 14:36:38.509547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.439 [2024-11-17 14:36:38.509554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.439 [2024-11-17 14:36:38.509560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.439 9912.33 IOPS, 38.72 MiB/s [2024-11-17T13:36:38.664Z] [2024-11-17 14:36:38.521173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.439 [2024-11-17 14:36:38.521578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.439 [2024-11-17 14:36:38.521596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.439 [2024-11-17 14:36:38.521604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.439 [2024-11-17 14:36:38.521767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.439 [2024-11-17 14:36:38.521931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.439 [2024-11-17 14:36:38.521941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.439 [2024-11-17 14:36:38.521947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.439 [2024-11-17 14:36:38.521953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
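(The "9912.33 IOPS, 38.72 MiB/s" entry above is the perf tool's periodic throughput tick, interleaved with the reconnect errors. The two numbers are mutually consistent with a 4 KiB I/O size; that block size is an inference from the arithmetic, not something this log states. A minimal C check, purely illustrative:)

/* Sketch: sanity-check the perf tick above.
 * The 4 KiB I/O size is an assumption inferred from the numbers. */
#include <stdio.h>

int main(void)
{
    double iops = 9912.33;
    double io_bytes = 4096.0;  /* assumed 4 KiB per I/O */
    /* 9912.33 * 4096 / 1048576 = 38.72 MiB/s, matching the tick */
    printf("%.2f MiB/s\n", iops * io_bytes / (1024.0 * 1024.0));
    return 0;
}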
00:26:49.439 [2024-11-17 14:36:38.534044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.439 [2024-11-17 14:36:38.534459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.439 [2024-11-17 14:36:38.534500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.439 [2024-11-17 14:36:38.534525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.439 [2024-11-17 14:36:38.535106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.439 [2024-11-17 14:36:38.535640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.439 [2024-11-17 14:36:38.535650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.439 [2024-11-17 14:36:38.535656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.439 [2024-11-17 14:36:38.535663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.439 [2024-11-17 14:36:38.546851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.439 [2024-11-17 14:36:38.547285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.439 [2024-11-17 14:36:38.547331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.439 [2024-11-17 14:36:38.547370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.439 [2024-11-17 14:36:38.547953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.439 [2024-11-17 14:36:38.548548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.439 [2024-11-17 14:36:38.548575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.439 [2024-11-17 14:36:38.548597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.439 [2024-11-17 14:36:38.548617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.439 [2024-11-17 14:36:38.559760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.439 [2024-11-17 14:36:38.560082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.439 [2024-11-17 14:36:38.560099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.439 [2024-11-17 14:36:38.560106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.439 [2024-11-17 14:36:38.560269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.439 [2024-11-17 14:36:38.560439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.439 [2024-11-17 14:36:38.560449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.439 [2024-11-17 14:36:38.560456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.439 [2024-11-17 14:36:38.560463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.439 [2024-11-17 14:36:38.572561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.439 [2024-11-17 14:36:38.572989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.439 [2024-11-17 14:36:38.573005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.439 [2024-11-17 14:36:38.573016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.439 [2024-11-17 14:36:38.573179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.439 [2024-11-17 14:36:38.573344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.439 [2024-11-17 14:36:38.573359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.439 [2024-11-17 14:36:38.573366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.439 [2024-11-17 14:36:38.573373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.439 [2024-11-17 14:36:38.585485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.439 [2024-11-17 14:36:38.585926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.439 [2024-11-17 14:36:38.585942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.439 [2024-11-17 14:36:38.585950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.439 [2024-11-17 14:36:38.586114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.439 [2024-11-17 14:36:38.586277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.439 [2024-11-17 14:36:38.586286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.439 [2024-11-17 14:36:38.586293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.439 [2024-11-17 14:36:38.586299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.439 [2024-11-17 14:36:38.598489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.439 [2024-11-17 14:36:38.598848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.439 [2024-11-17 14:36:38.598865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.439 [2024-11-17 14:36:38.598872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.439 [2024-11-17 14:36:38.599045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.439 [2024-11-17 14:36:38.599218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.439 [2024-11-17 14:36:38.599228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.439 [2024-11-17 14:36:38.599234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.439 [2024-11-17 14:36:38.599241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.439 [2024-11-17 14:36:38.611588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.439 [2024-11-17 14:36:38.611995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.439 [2024-11-17 14:36:38.612013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.439 [2024-11-17 14:36:38.612021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.439 [2024-11-17 14:36:38.612194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.439 [2024-11-17 14:36:38.612378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.439 [2024-11-17 14:36:38.612388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.439 [2024-11-17 14:36:38.612395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.439 [2024-11-17 14:36:38.612402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.439 [2024-11-17 14:36:38.624511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.440 [2024-11-17 14:36:38.624907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.440 [2024-11-17 14:36:38.624924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.440 [2024-11-17 14:36:38.624932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.440 [2024-11-17 14:36:38.625095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.440 [2024-11-17 14:36:38.625259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.440 [2024-11-17 14:36:38.625269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.440 [2024-11-17 14:36:38.625275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.440 [2024-11-17 14:36:38.625281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.440 [2024-11-17 14:36:38.637390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.440 [2024-11-17 14:36:38.637835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.440 [2024-11-17 14:36:38.637879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.440 [2024-11-17 14:36:38.637901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.440 [2024-11-17 14:36:38.638501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.440 [2024-11-17 14:36:38.638928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.440 [2024-11-17 14:36:38.638937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.440 [2024-11-17 14:36:38.638944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.440 [2024-11-17 14:36:38.638950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.440 [2024-11-17 14:36:38.650190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.440 [2024-11-17 14:36:38.650533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.440 [2024-11-17 14:36:38.650551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.440 [2024-11-17 14:36:38.650559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.440 [2024-11-17 14:36:38.650724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.440 [2024-11-17 14:36:38.650889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.440 [2024-11-17 14:36:38.650898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.440 [2024-11-17 14:36:38.650912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.440 [2024-11-17 14:36:38.650919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.700 [2024-11-17 14:36:38.663336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.700 [2024-11-17 14:36:38.663818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.700 [2024-11-17 14:36:38.663865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.700 [2024-11-17 14:36:38.663890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.700 [2024-11-17 14:36:38.664487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.700 [2024-11-17 14:36:38.665053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.700 [2024-11-17 14:36:38.665063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.700 [2024-11-17 14:36:38.665069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.700 [2024-11-17 14:36:38.665075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.700 [2024-11-17 14:36:38.676336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.700 [2024-11-17 14:36:38.676713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.700 [2024-11-17 14:36:38.676731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.700 [2024-11-17 14:36:38.676738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.700 [2024-11-17 14:36:38.676902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.700 [2024-11-17 14:36:38.677064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.700 [2024-11-17 14:36:38.677073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.700 [2024-11-17 14:36:38.677079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.700 [2024-11-17 14:36:38.677086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.700 [2024-11-17 14:36:38.689273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.700 [2024-11-17 14:36:38.689706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.700 [2024-11-17 14:36:38.689751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.700 [2024-11-17 14:36:38.689775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.700 [2024-11-17 14:36:38.690142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.700 [2024-11-17 14:36:38.690306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.700 [2024-11-17 14:36:38.690315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.700 [2024-11-17 14:36:38.690321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.700 [2024-11-17 14:36:38.690327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.700 [2024-11-17 14:36:38.702147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.700 [2024-11-17 14:36:38.702508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.700 [2024-11-17 14:36:38.702525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.700 [2024-11-17 14:36:38.702534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.700 [2024-11-17 14:36:38.702697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.700 [2024-11-17 14:36:38.702861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.700 [2024-11-17 14:36:38.702870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.700 [2024-11-17 14:36:38.702877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.700 [2024-11-17 14:36:38.702883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.700 [2024-11-17 14:36:38.714992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.700 [2024-11-17 14:36:38.715407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.700 [2024-11-17 14:36:38.715425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.700 [2024-11-17 14:36:38.715434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.700 [2024-11-17 14:36:38.715597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.700 [2024-11-17 14:36:38.715760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.700 [2024-11-17 14:36:38.715770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.700 [2024-11-17 14:36:38.715776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.700 [2024-11-17 14:36:38.715782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.700 [2024-11-17 14:36:38.728033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.700 [2024-11-17 14:36:38.728386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.700 [2024-11-17 14:36:38.728404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.700 [2024-11-17 14:36:38.728412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.700 [2024-11-17 14:36:38.728583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.700 [2024-11-17 14:36:38.728757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.700 [2024-11-17 14:36:38.728766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.700 [2024-11-17 14:36:38.728773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.700 [2024-11-17 14:36:38.728779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.700 [2024-11-17 14:36:38.740932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.700 [2024-11-17 14:36:38.741356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.700 [2024-11-17 14:36:38.741373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.700 [2024-11-17 14:36:38.741384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.700 [2024-11-17 14:36:38.741547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.700 [2024-11-17 14:36:38.741712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.700 [2024-11-17 14:36:38.741721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.700 [2024-11-17 14:36:38.741727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.700 [2024-11-17 14:36:38.741734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.700 [2024-11-17 14:36:38.753823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.700 [2024-11-17 14:36:38.754240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.700 [2024-11-17 14:36:38.754257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.700 [2024-11-17 14:36:38.754265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.701 [2024-11-17 14:36:38.754434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.701 [2024-11-17 14:36:38.754598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.701 [2024-11-17 14:36:38.754607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.701 [2024-11-17 14:36:38.754613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.701 [2024-11-17 14:36:38.754619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.701 [2024-11-17 14:36:38.766661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.701 [2024-11-17 14:36:38.766999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.701 [2024-11-17 14:36:38.767015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.701 [2024-11-17 14:36:38.767022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.701 [2024-11-17 14:36:38.767186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.701 [2024-11-17 14:36:38.767350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.701 [2024-11-17 14:36:38.767365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.701 [2024-11-17 14:36:38.767371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.701 [2024-11-17 14:36:38.767395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.701 [2024-11-17 14:36:38.779508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.701 [2024-11-17 14:36:38.779910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.701 [2024-11-17 14:36:38.779927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.701 [2024-11-17 14:36:38.779934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.701 [2024-11-17 14:36:38.780097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.701 [2024-11-17 14:36:38.780264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.701 [2024-11-17 14:36:38.780273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.701 [2024-11-17 14:36:38.780280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.701 [2024-11-17 14:36:38.780286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.701 [2024-11-17 14:36:38.792403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.701 [2024-11-17 14:36:38.792729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.701 [2024-11-17 14:36:38.792746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.701 [2024-11-17 14:36:38.792754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.701 [2024-11-17 14:36:38.792917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.701 [2024-11-17 14:36:38.793081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.701 [2024-11-17 14:36:38.793090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.701 [2024-11-17 14:36:38.793096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.701 [2024-11-17 14:36:38.793103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.701 [2024-11-17 14:36:38.805215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.701 [2024-11-17 14:36:38.805620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.701 [2024-11-17 14:36:38.805637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.701 [2024-11-17 14:36:38.805645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.701 [2024-11-17 14:36:38.805808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.701 [2024-11-17 14:36:38.805971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.701 [2024-11-17 14:36:38.805980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.701 [2024-11-17 14:36:38.805987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.701 [2024-11-17 14:36:38.805993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.701 [2024-11-17 14:36:38.818112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.701 [2024-11-17 14:36:38.818389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.701 [2024-11-17 14:36:38.818406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.701 [2024-11-17 14:36:38.818413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.701 [2024-11-17 14:36:38.818577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.701 [2024-11-17 14:36:38.818741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.701 [2024-11-17 14:36:38.818750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.701 [2024-11-17 14:36:38.818761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.701 [2024-11-17 14:36:38.818768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.701 [2024-11-17 14:36:38.831041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.701 [2024-11-17 14:36:38.831438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.701 [2024-11-17 14:36:38.831456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.701 [2024-11-17 14:36:38.831464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.701 [2024-11-17 14:36:38.831629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.701 [2024-11-17 14:36:38.831793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.701 [2024-11-17 14:36:38.831802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.701 [2024-11-17 14:36:38.831809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.701 [2024-11-17 14:36:38.831816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.701 [2024-11-17 14:36:38.844207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.701 [2024-11-17 14:36:38.844625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.701 [2024-11-17 14:36:38.844643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.701 [2024-11-17 14:36:38.844651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.701 [2024-11-17 14:36:38.844829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.701 [2024-11-17 14:36:38.845008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.701 [2024-11-17 14:36:38.845017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.701 [2024-11-17 14:36:38.845024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.701 [2024-11-17 14:36:38.845032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.701 [2024-11-17 14:36:38.857055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.701 [2024-11-17 14:36:38.857472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.701 [2024-11-17 14:36:38.857489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.701 [2024-11-17 14:36:38.857497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.701 [2024-11-17 14:36:38.857660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.701 [2024-11-17 14:36:38.857824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.701 [2024-11-17 14:36:38.857833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.701 [2024-11-17 14:36:38.857839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.701 [2024-11-17 14:36:38.857846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.701 [2024-11-17 14:36:38.869889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.701 [2024-11-17 14:36:38.870164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.701 [2024-11-17 14:36:38.870181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.701 [2024-11-17 14:36:38.870188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.701 [2024-11-17 14:36:38.870357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.701 [2024-11-17 14:36:38.870523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.701 [2024-11-17 14:36:38.870532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.701 [2024-11-17 14:36:38.870539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.701 [2024-11-17 14:36:38.870545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.701 [2024-11-17 14:36:38.882743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.701 [2024-11-17 14:36:38.883011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.701 [2024-11-17 14:36:38.883028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.702 [2024-11-17 14:36:38.883035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.702 [2024-11-17 14:36:38.883199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.702 [2024-11-17 14:36:38.883366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.702 [2024-11-17 14:36:38.883376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.702 [2024-11-17 14:36:38.883382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.702 [2024-11-17 14:36:38.883389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.702 [2024-11-17 14:36:38.895650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.702 [2024-11-17 14:36:38.895970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.702 [2024-11-17 14:36:38.895986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.702 [2024-11-17 14:36:38.895994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.702 [2024-11-17 14:36:38.896157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.702 [2024-11-17 14:36:38.896321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.702 [2024-11-17 14:36:38.896331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.702 [2024-11-17 14:36:38.896337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.702 [2024-11-17 14:36:38.896343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.702 [2024-11-17 14:36:38.908539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.702 [2024-11-17 14:36:38.908860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.702 [2024-11-17 14:36:38.908877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.702 [2024-11-17 14:36:38.908888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.702 [2024-11-17 14:36:38.909051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.702 [2024-11-17 14:36:38.909215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.702 [2024-11-17 14:36:38.909224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.702 [2024-11-17 14:36:38.909230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.702 [2024-11-17 14:36:38.909236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.962 [2024-11-17 14:36:38.921595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.962 [2024-11-17 14:36:38.921932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.962 [2024-11-17 14:36:38.921977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.962 [2024-11-17 14:36:38.922001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.962 [2024-11-17 14:36:38.922536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.963 [2024-11-17 14:36:38.922712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.963 [2024-11-17 14:36:38.922722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.963 [2024-11-17 14:36:38.922729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.963 [2024-11-17 14:36:38.922736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.963 [2024-11-17 14:36:38.934514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.963 [2024-11-17 14:36:38.934837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.963 [2024-11-17 14:36:38.934854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.963 [2024-11-17 14:36:38.934862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.963 [2024-11-17 14:36:38.935025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.963 [2024-11-17 14:36:38.935188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.963 [2024-11-17 14:36:38.935197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.963 [2024-11-17 14:36:38.935204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.963 [2024-11-17 14:36:38.935210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.963 [2024-11-17 14:36:38.947323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.963 [2024-11-17 14:36:38.947603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.963 [2024-11-17 14:36:38.947620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.963 [2024-11-17 14:36:38.947627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.963 [2024-11-17 14:36:38.947790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.963 [2024-11-17 14:36:38.947957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.963 [2024-11-17 14:36:38.947966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.963 [2024-11-17 14:36:38.947972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.963 [2024-11-17 14:36:38.947978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.963 [2024-11-17 14:36:38.960397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.963 [2024-11-17 14:36:38.960724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.963 [2024-11-17 14:36:38.960742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.963 [2024-11-17 14:36:38.960750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.963 [2024-11-17 14:36:38.960928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.963 [2024-11-17 14:36:38.961107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.963 [2024-11-17 14:36:38.961117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.963 [2024-11-17 14:36:38.961124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.963 [2024-11-17 14:36:38.961131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.963 [2024-11-17 14:36:38.973495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.963 [2024-11-17 14:36:38.973907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.963 [2024-11-17 14:36:38.973926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.963 [2024-11-17 14:36:38.973934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.963 [2024-11-17 14:36:38.974112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.963 [2024-11-17 14:36:38.974293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.963 [2024-11-17 14:36:38.974303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.963 [2024-11-17 14:36:38.974309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.963 [2024-11-17 14:36:38.974317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.963 [2024-11-17 14:36:38.986697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.963 [2024-11-17 14:36:38.987065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.963 [2024-11-17 14:36:38.987082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.963 [2024-11-17 14:36:38.987090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.963 [2024-11-17 14:36:38.987268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.963 [2024-11-17 14:36:38.987453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.963 [2024-11-17 14:36:38.987463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.963 [2024-11-17 14:36:38.987475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.963 [2024-11-17 14:36:38.987481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.963 [2024-11-17 14:36:38.999848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.963 [2024-11-17 14:36:39.000171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.963 [2024-11-17 14:36:39.000189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.963 [2024-11-17 14:36:39.000198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.963 [2024-11-17 14:36:39.000381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.963 [2024-11-17 14:36:39.000561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.963 [2024-11-17 14:36:39.000570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.963 [2024-11-17 14:36:39.000577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.963 [2024-11-17 14:36:39.000584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.963 [2024-11-17 14:36:39.012958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.963 [2024-11-17 14:36:39.013398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.963 [2024-11-17 14:36:39.013416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.963 [2024-11-17 14:36:39.013424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.963 [2024-11-17 14:36:39.013602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.963 [2024-11-17 14:36:39.013781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.963 [2024-11-17 14:36:39.013791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.963 [2024-11-17 14:36:39.013798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.963 [2024-11-17 14:36:39.013805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.963 [2024-11-17 14:36:39.026008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.963 [2024-11-17 14:36:39.026346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.963 [2024-11-17 14:36:39.026370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.963 [2024-11-17 14:36:39.026378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.963 [2024-11-17 14:36:39.026556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.963 [2024-11-17 14:36:39.026734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.963 [2024-11-17 14:36:39.026743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.963 [2024-11-17 14:36:39.026750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.963 [2024-11-17 14:36:39.026757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.963 [2024-11-17 14:36:39.039145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.963 [2024-11-17 14:36:39.039573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.963 [2024-11-17 14:36:39.039591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.963 [2024-11-17 14:36:39.039599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.963 [2024-11-17 14:36:39.039777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.963 [2024-11-17 14:36:39.039956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.963 [2024-11-17 14:36:39.039966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.963 [2024-11-17 14:36:39.039973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.963 [2024-11-17 14:36:39.039979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.963 [2024-11-17 14:36:39.052343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.963 [2024-11-17 14:36:39.052785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.964 [2024-11-17 14:36:39.052802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.964 [2024-11-17 14:36:39.052810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.964 [2024-11-17 14:36:39.052988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.964 [2024-11-17 14:36:39.053166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.964 [2024-11-17 14:36:39.053175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.964 [2024-11-17 14:36:39.053182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.964 [2024-11-17 14:36:39.053189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.964 [2024-11-17 14:36:39.065631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.964 [2024-11-17 14:36:39.066029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.964 [2024-11-17 14:36:39.066047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.964 [2024-11-17 14:36:39.066055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.964 [2024-11-17 14:36:39.066233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.964 [2024-11-17 14:36:39.066417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.964 [2024-11-17 14:36:39.066427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.964 [2024-11-17 14:36:39.066434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.964 [2024-11-17 14:36:39.066441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.964 [2024-11-17 14:36:39.078804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.964 [2024-11-17 14:36:39.079236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.964 [2024-11-17 14:36:39.079254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.964 [2024-11-17 14:36:39.079264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.964 [2024-11-17 14:36:39.079448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.964 [2024-11-17 14:36:39.079627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.964 [2024-11-17 14:36:39.079637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.964 [2024-11-17 14:36:39.079644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.964 [2024-11-17 14:36:39.079650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.964 [2024-11-17 14:36:39.092011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.964 [2024-11-17 14:36:39.092441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.964 [2024-11-17 14:36:39.092458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.964 [2024-11-17 14:36:39.092466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.964 [2024-11-17 14:36:39.092644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.964 [2024-11-17 14:36:39.092825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.964 [2024-11-17 14:36:39.092835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.964 [2024-11-17 14:36:39.092842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.964 [2024-11-17 14:36:39.092849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.964 [2024-11-17 14:36:39.105065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.964 [2024-11-17 14:36:39.105494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.964 [2024-11-17 14:36:39.105513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.964 [2024-11-17 14:36:39.105520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.964 [2024-11-17 14:36:39.105700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.964 [2024-11-17 14:36:39.105878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.964 [2024-11-17 14:36:39.105887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.964 [2024-11-17 14:36:39.105894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.964 [2024-11-17 14:36:39.105901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.964 [2024-11-17 14:36:39.118113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.964 [2024-11-17 14:36:39.118544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.964 [2024-11-17 14:36:39.118562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.964 [2024-11-17 14:36:39.118570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.964 [2024-11-17 14:36:39.118749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.964 [2024-11-17 14:36:39.118933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.964 [2024-11-17 14:36:39.118943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.964 [2024-11-17 14:36:39.118950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.964 [2024-11-17 14:36:39.118957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.964 [2024-11-17 14:36:39.131160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.964 [2024-11-17 14:36:39.131593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.964 [2024-11-17 14:36:39.131610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.964 [2024-11-17 14:36:39.131619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.964 [2024-11-17 14:36:39.131797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.964 [2024-11-17 14:36:39.131976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.964 [2024-11-17 14:36:39.131986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.964 [2024-11-17 14:36:39.131993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.964 [2024-11-17 14:36:39.131999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.964 [2024-11-17 14:36:39.144219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.964 [2024-11-17 14:36:39.144662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.964 [2024-11-17 14:36:39.144680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.964 [2024-11-17 14:36:39.144688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.964 [2024-11-17 14:36:39.144866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.964 [2024-11-17 14:36:39.145044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.964 [2024-11-17 14:36:39.145054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.964 [2024-11-17 14:36:39.145061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.964 [2024-11-17 14:36:39.145067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.964 [2024-11-17 14:36:39.157276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.964 [2024-11-17 14:36:39.157636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.964 [2024-11-17 14:36:39.157655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.964 [2024-11-17 14:36:39.157662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.964 [2024-11-17 14:36:39.157841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.964 [2024-11-17 14:36:39.158021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.964 [2024-11-17 14:36:39.158031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.964 [2024-11-17 14:36:39.158041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.964 [2024-11-17 14:36:39.158049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.964 [2024-11-17 14:36:39.170432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.964 [2024-11-17 14:36:39.170864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.964 [2024-11-17 14:36:39.170881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:49.964 [2024-11-17 14:36:39.170889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:49.964 [2024-11-17 14:36:39.171067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:49.964 [2024-11-17 14:36:39.171245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.964 [2024-11-17 14:36:39.171255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.964 [2024-11-17 14:36:39.171262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.964 [2024-11-17 14:36:39.171269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.224 [2024-11-17 14:36:39.183474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.224 [2024-11-17 14:36:39.183843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.224 [2024-11-17 14:36:39.183861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.224 [2024-11-17 14:36:39.183869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.224 [2024-11-17 14:36:39.184046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.224 [2024-11-17 14:36:39.184223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.224 [2024-11-17 14:36:39.184233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.224 [2024-11-17 14:36:39.184240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.224 [2024-11-17 14:36:39.184247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.224 [2024-11-17 14:36:39.196628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.224 [2024-11-17 14:36:39.196973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.224 [2024-11-17 14:36:39.196990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.225 [2024-11-17 14:36:39.196998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.225 [2024-11-17 14:36:39.197175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.225 [2024-11-17 14:36:39.197359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.225 [2024-11-17 14:36:39.197370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.225 [2024-11-17 14:36:39.197377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.225 [2024-11-17 14:36:39.197385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.225 [2024-11-17 14:36:39.209764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.225 [2024-11-17 14:36:39.210106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.225 [2024-11-17 14:36:39.210123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.225 [2024-11-17 14:36:39.210131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.225 [2024-11-17 14:36:39.210303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.225 [2024-11-17 14:36:39.210481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.225 [2024-11-17 14:36:39.210490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.225 [2024-11-17 14:36:39.210496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.225 [2024-11-17 14:36:39.210502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.225 [2024-11-17 14:36:39.222819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.225 [2024-11-17 14:36:39.223223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.225 [2024-11-17 14:36:39.223240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.225 [2024-11-17 14:36:39.223248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.225 [2024-11-17 14:36:39.223425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.225 [2024-11-17 14:36:39.223600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.225 [2024-11-17 14:36:39.223609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.225 [2024-11-17 14:36:39.223616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.225 [2024-11-17 14:36:39.223623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.225 [2024-11-17 14:36:39.235955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.225 [2024-11-17 14:36:39.236362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.225 [2024-11-17 14:36:39.236380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.225 [2024-11-17 14:36:39.236388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.225 [2024-11-17 14:36:39.236560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.225 [2024-11-17 14:36:39.236733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.225 [2024-11-17 14:36:39.236742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.225 [2024-11-17 14:36:39.236749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.225 [2024-11-17 14:36:39.236755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.225 [2024-11-17 14:36:39.248886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.225 [2024-11-17 14:36:39.249228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.225 [2024-11-17 14:36:39.249244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.225 [2024-11-17 14:36:39.249255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.225 [2024-11-17 14:36:39.249421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.225 [2024-11-17 14:36:39.249585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.225 [2024-11-17 14:36:39.249595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.225 [2024-11-17 14:36:39.249601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.225 [2024-11-17 14:36:39.249607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.225 [2024-11-17 14:36:39.261825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.225 [2024-11-17 14:36:39.262251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.225 [2024-11-17 14:36:39.262269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.225 [2024-11-17 14:36:39.262277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.225 [2024-11-17 14:36:39.262455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.225 [2024-11-17 14:36:39.262628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.225 [2024-11-17 14:36:39.262637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.225 [2024-11-17 14:36:39.262643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.225 [2024-11-17 14:36:39.262650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.225 [2024-11-17 14:36:39.274785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.225 [2024-11-17 14:36:39.275196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.225 [2024-11-17 14:36:39.275213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.225 [2024-11-17 14:36:39.275220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.225 [2024-11-17 14:36:39.275389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.225 [2024-11-17 14:36:39.275552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.225 [2024-11-17 14:36:39.275561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.225 [2024-11-17 14:36:39.275567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.225 [2024-11-17 14:36:39.275573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.225 [2024-11-17 14:36:39.287653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.225 [2024-11-17 14:36:39.288052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.225 [2024-11-17 14:36:39.288069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.225 [2024-11-17 14:36:39.288077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.225 [2024-11-17 14:36:39.288239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.225 [2024-11-17 14:36:39.288414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.225 [2024-11-17 14:36:39.288424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.225 [2024-11-17 14:36:39.288431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.225 [2024-11-17 14:36:39.288438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.225 [2024-11-17 14:36:39.300546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.225 [2024-11-17 14:36:39.300962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.225 [2024-11-17 14:36:39.300979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.225 [2024-11-17 14:36:39.300986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.225 [2024-11-17 14:36:39.301148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.225 [2024-11-17 14:36:39.301312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.225 [2024-11-17 14:36:39.301321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.225 [2024-11-17 14:36:39.301328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.225 [2024-11-17 14:36:39.301334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.225 [2024-11-17 14:36:39.313434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.225 [2024-11-17 14:36:39.313782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.225 [2024-11-17 14:36:39.313798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.225 [2024-11-17 14:36:39.313806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.225 [2024-11-17 14:36:39.313967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.225 [2024-11-17 14:36:39.314130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.225 [2024-11-17 14:36:39.314139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.225 [2024-11-17 14:36:39.314145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.225 [2024-11-17 14:36:39.314153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.225 [2024-11-17 14:36:39.326252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.225 [2024-11-17 14:36:39.326674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.225 [2024-11-17 14:36:39.326691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.225 [2024-11-17 14:36:39.326699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.225 [2024-11-17 14:36:39.326862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.225 [2024-11-17 14:36:39.327026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.225 [2024-11-17 14:36:39.327035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.225 [2024-11-17 14:36:39.327044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.225 [2024-11-17 14:36:39.327051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.225 [2024-11-17 14:36:39.339165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.226 [2024-11-17 14:36:39.339516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.226 [2024-11-17 14:36:39.339532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.226 [2024-11-17 14:36:39.339540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.226 [2024-11-17 14:36:39.339702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.226 [2024-11-17 14:36:39.339867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.226 [2024-11-17 14:36:39.339876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.226 [2024-11-17 14:36:39.339882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.226 [2024-11-17 14:36:39.339888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.226 [2024-11-17 14:36:39.351997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.226 [2024-11-17 14:36:39.352422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.226 [2024-11-17 14:36:39.352468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.226 [2024-11-17 14:36:39.352492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.226 [2024-11-17 14:36:39.353071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.226 [2024-11-17 14:36:39.353505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.226 [2024-11-17 14:36:39.353514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.226 [2024-11-17 14:36:39.353521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.226 [2024-11-17 14:36:39.353527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.226 [2024-11-17 14:36:39.364803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.226 [2024-11-17 14:36:39.365225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.226 [2024-11-17 14:36:39.365275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.226 [2024-11-17 14:36:39.365299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.226 [2024-11-17 14:36:39.365894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.226 [2024-11-17 14:36:39.366489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.226 [2024-11-17 14:36:39.366517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.226 [2024-11-17 14:36:39.366538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.226 [2024-11-17 14:36:39.366558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.226 [2024-11-17 14:36:39.377739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.226 [2024-11-17 14:36:39.378163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.226 [2024-11-17 14:36:39.378209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.226 [2024-11-17 14:36:39.378233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.226 [2024-11-17 14:36:39.378745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.226 [2024-11-17 14:36:39.378911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.226 [2024-11-17 14:36:39.378920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.226 [2024-11-17 14:36:39.378927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.226 [2024-11-17 14:36:39.378933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.226 [2024-11-17 14:36:39.390582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.226 [2024-11-17 14:36:39.390923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.226 [2024-11-17 14:36:39.390940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.226 [2024-11-17 14:36:39.390948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.226 [2024-11-17 14:36:39.391111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.226 [2024-11-17 14:36:39.391274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.226 [2024-11-17 14:36:39.391283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.226 [2024-11-17 14:36:39.391289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.226 [2024-11-17 14:36:39.391296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.226 [2024-11-17 14:36:39.403498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.226 [2024-11-17 14:36:39.403915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.226 [2024-11-17 14:36:39.403931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.226 [2024-11-17 14:36:39.403939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.226 [2024-11-17 14:36:39.404102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.226 [2024-11-17 14:36:39.404266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.226 [2024-11-17 14:36:39.404275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.226 [2024-11-17 14:36:39.404281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.226 [2024-11-17 14:36:39.404287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.226 [2024-11-17 14:36:39.416400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.226 [2024-11-17 14:36:39.416800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.226 [2024-11-17 14:36:39.416817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.226 [2024-11-17 14:36:39.416828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.226 [2024-11-17 14:36:39.416992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.226 [2024-11-17 14:36:39.417156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.226 [2024-11-17 14:36:39.417165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.226 [2024-11-17 14:36:39.417171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.226 [2024-11-17 14:36:39.417178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.226 [2024-11-17 14:36:39.429283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.226 [2024-11-17 14:36:39.429681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.226 [2024-11-17 14:36:39.429699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.226 [2024-11-17 14:36:39.429706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.226 [2024-11-17 14:36:39.429871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.226 [2024-11-17 14:36:39.430035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.226 [2024-11-17 14:36:39.430046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.226 [2024-11-17 14:36:39.430052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.226 [2024-11-17 14:36:39.430059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.226 [2024-11-17 14:36:39.442513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.226 [2024-11-17 14:36:39.442850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.226 [2024-11-17 14:36:39.442868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.226 [2024-11-17 14:36:39.442876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.226 [2024-11-17 14:36:39.443054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.226 [2024-11-17 14:36:39.443232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.226 [2024-11-17 14:36:39.443243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.226 [2024-11-17 14:36:39.443250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.226 [2024-11-17 14:36:39.443259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.499 [2024-11-17 14:36:39.455514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.499 [2024-11-17 14:36:39.455833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.499 [2024-11-17 14:36:39.455850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.499 [2024-11-17 14:36:39.455858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.499 [2024-11-17 14:36:39.456021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.499 [2024-11-17 14:36:39.456187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.499 [2024-11-17 14:36:39.456196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.499 [2024-11-17 14:36:39.456202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.499 [2024-11-17 14:36:39.456208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.500 [2024-11-17 14:36:39.468406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.500 [2024-11-17 14:36:39.468773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.500 [2024-11-17 14:36:39.468824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.500 [2024-11-17 14:36:39.468847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.500 [2024-11-17 14:36:39.469440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.500 [2024-11-17 14:36:39.469993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.500 [2024-11-17 14:36:39.470002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.500 [2024-11-17 14:36:39.470009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.500 [2024-11-17 14:36:39.470015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.500 [2024-11-17 14:36:39.481212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.500 [2024-11-17 14:36:39.481646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.500 [2024-11-17 14:36:39.481691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.500 [2024-11-17 14:36:39.481715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.500 [2024-11-17 14:36:39.482134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.500 [2024-11-17 14:36:39.482300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.500 [2024-11-17 14:36:39.482310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.500 [2024-11-17 14:36:39.482317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.500 [2024-11-17 14:36:39.482324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.500 [2024-11-17 14:36:39.494301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.500 [2024-11-17 14:36:39.494726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.500 [2024-11-17 14:36:39.494744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.501 [2024-11-17 14:36:39.494751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.501 [2024-11-17 14:36:39.494929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.501 [2024-11-17 14:36:39.495107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.501 [2024-11-17 14:36:39.495115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.501 [2024-11-17 14:36:39.495126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.501 [2024-11-17 14:36:39.495132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.501 [2024-11-17 14:36:39.507146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.501 [2024-11-17 14:36:39.507587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.501 [2024-11-17 14:36:39.507632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.501 [2024-11-17 14:36:39.507656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.502 [2024-11-17 14:36:39.508152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.502 [2024-11-17 14:36:39.508317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.502 [2024-11-17 14:36:39.508326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.502 [2024-11-17 14:36:39.508332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.502 [2024-11-17 14:36:39.508338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.502 7434.25 IOPS, 29.04 MiB/s [2024-11-17T13:36:39.727Z] [2024-11-17 14:36:39.521203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.502 [2024-11-17 14:36:39.521645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.502 [2024-11-17 14:36:39.521690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.502 [2024-11-17 14:36:39.521714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.502 [2024-11-17 14:36:39.522298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.502 [2024-11-17 14:36:39.522898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.502 [2024-11-17 14:36:39.522924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.502 [2024-11-17 14:36:39.522931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.502 [2024-11-17 14:36:39.522937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.502 [2024-11-17 14:36:39.534066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.502 [2024-11-17 14:36:39.534501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.502 [2024-11-17 14:36:39.534548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.502 [2024-11-17 14:36:39.534572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.502 [2024-11-17 14:36:39.535147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.503 [2024-11-17 14:36:39.535311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.503 [2024-11-17 14:36:39.535320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.503 [2024-11-17 14:36:39.535326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.503 [2024-11-17 14:36:39.535332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.503 [2024-11-17 14:36:39.546995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.503 [2024-11-17 14:36:39.547412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.503 [2024-11-17 14:36:39.547453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.503 [2024-11-17 14:36:39.547479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.503 [2024-11-17 14:36:39.547994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.503 [2024-11-17 14:36:39.548159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.503 [2024-11-17 14:36:39.548168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.503 [2024-11-17 14:36:39.548174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.503 [2024-11-17 14:36:39.548180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.503 [2024-11-17 14:36:39.559817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.503 [2024-11-17 14:36:39.560232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.503 [2024-11-17 14:36:39.560249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:50.503 [2024-11-17 14:36:39.560256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:50.503 [2024-11-17 14:36:39.560424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:50.503 [2024-11-17 14:36:39.560589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.503 [2024-11-17 14:36:39.560598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.503 [2024-11-17 14:36:39.560605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.503 [2024-11-17 14:36:39.560611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.503 [2024-11-17 14:36:39.572718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.503 [2024-11-17 14:36:39.573143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.503 [2024-11-17 14:36:39.573188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.503 [2024-11-17 14:36:39.573212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.503 [2024-11-17 14:36:39.573805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.503 [2024-11-17 14:36:39.574271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.503 [2024-11-17 14:36:39.574280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.503 [2024-11-17 14:36:39.574286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.503 [2024-11-17 14:36:39.574292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.504 [2024-11-17 14:36:39.585642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.504 [2024-11-17 14:36:39.586053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.504 [2024-11-17 14:36:39.586070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.504 [2024-11-17 14:36:39.586081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.504 [2024-11-17 14:36:39.586246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.504 [2024-11-17 14:36:39.586417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.504 [2024-11-17 14:36:39.586427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.504 [2024-11-17 14:36:39.586433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.504 [2024-11-17 14:36:39.586439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.504 [2024-11-17 14:36:39.598529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.504 [2024-11-17 14:36:39.598938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.504 [2024-11-17 14:36:39.598955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.504 [2024-11-17 14:36:39.598963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.504 [2024-11-17 14:36:39.599125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.504 [2024-11-17 14:36:39.599288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.504 [2024-11-17 14:36:39.599297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.504 [2024-11-17 14:36:39.599304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.504 [2024-11-17 14:36:39.599310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.504 [2024-11-17 14:36:39.611400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.504 [2024-11-17 14:36:39.611821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.504 [2024-11-17 14:36:39.611866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.504 [2024-11-17 14:36:39.611889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.505 [2024-11-17 14:36:39.612330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.505 [2024-11-17 14:36:39.612523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.505 [2024-11-17 14:36:39.612533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.505 [2024-11-17 14:36:39.612540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.505 [2024-11-17 14:36:39.612547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.505 [2024-11-17 14:36:39.624263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.505 [2024-11-17 14:36:39.624617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.505 [2024-11-17 14:36:39.624634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.505 [2024-11-17 14:36:39.624641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.505 [2024-11-17 14:36:39.624805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.505 [2024-11-17 14:36:39.624972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.505 [2024-11-17 14:36:39.624982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.505 [2024-11-17 14:36:39.624988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.505 [2024-11-17 14:36:39.624994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.505 [2024-11-17 14:36:39.637086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.506 [2024-11-17 14:36:39.637482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.506 [2024-11-17 14:36:39.637499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.506 [2024-11-17 14:36:39.637507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.506 [2024-11-17 14:36:39.637671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.506 [2024-11-17 14:36:39.637835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.506 [2024-11-17 14:36:39.637844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.506 [2024-11-17 14:36:39.637851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.506 [2024-11-17 14:36:39.637857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.506 [2024-11-17 14:36:39.650002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.506 [2024-11-17 14:36:39.650425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.506 [2024-11-17 14:36:39.650443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.506 [2024-11-17 14:36:39.650451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.506 [2024-11-17 14:36:39.650615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.506 [2024-11-17 14:36:39.650778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.506 [2024-11-17 14:36:39.650787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.506 [2024-11-17 14:36:39.650794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.506 [2024-11-17 14:36:39.650801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.506 [2024-11-17 14:36:39.662908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.506 [2024-11-17 14:36:39.663329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.506 [2024-11-17 14:36:39.663387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.506 [2024-11-17 14:36:39.663413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.506 [2024-11-17 14:36:39.663995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.508 [2024-11-17 14:36:39.664506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.508 [2024-11-17 14:36:39.664516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.508 [2024-11-17 14:36:39.664526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.508 [2024-11-17 14:36:39.664533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.508 [2024-11-17 14:36:39.675717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.508 [2024-11-17 14:36:39.676145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.508 [2024-11-17 14:36:39.676190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.508 [2024-11-17 14:36:39.676214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.508 [2024-11-17 14:36:39.676606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.508 [2024-11-17 14:36:39.676771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.508 [2024-11-17 14:36:39.676780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.508 [2024-11-17 14:36:39.676787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.508 [2024-11-17 14:36:39.676793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.508 [2024-11-17 14:36:39.688547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.508 [2024-11-17 14:36:39.688973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.508 [2024-11-17 14:36:39.689019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.508 [2024-11-17 14:36:39.689043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.508 [2024-11-17 14:36:39.689535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.508 [2024-11-17 14:36:39.689701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.508 [2024-11-17 14:36:39.689710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.508 [2024-11-17 14:36:39.689716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.508 [2024-11-17 14:36:39.689723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.508 [2024-11-17 14:36:39.701470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.508 [2024-11-17 14:36:39.701896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.508 [2024-11-17 14:36:39.701913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.508 [2024-11-17 14:36:39.701921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.508 [2024-11-17 14:36:39.702085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.508 [2024-11-17 14:36:39.702249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.508 [2024-11-17 14:36:39.702258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.508 [2024-11-17 14:36:39.702265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.508 [2024-11-17 14:36:39.702271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.508 [2024-11-17 14:36:39.714491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.768 [2024-11-17 14:36:39.714840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.768 [2024-11-17 14:36:39.714858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.768 [2024-11-17 14:36:39.714866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.768 [2024-11-17 14:36:39.715045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.768 [2024-11-17 14:36:39.715224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.768 [2024-11-17 14:36:39.715234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.768 [2024-11-17 14:36:39.715241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.769 [2024-11-17 14:36:39.715248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.769 [2024-11-17 14:36:39.727401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.769 [2024-11-17 14:36:39.727823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.769 [2024-11-17 14:36:39.727839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.769 [2024-11-17 14:36:39.727847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.769 [2024-11-17 14:36:39.728010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.769 [2024-11-17 14:36:39.728173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.769 [2024-11-17 14:36:39.728182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.769 [2024-11-17 14:36:39.728188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.769 [2024-11-17 14:36:39.728195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.769 [2024-11-17 14:36:39.740343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.769 [2024-11-17 14:36:39.740790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.769 [2024-11-17 14:36:39.740807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.769 [2024-11-17 14:36:39.740814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.769 [2024-11-17 14:36:39.740977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.769 [2024-11-17 14:36:39.741141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.769 [2024-11-17 14:36:39.741151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.769 [2024-11-17 14:36:39.741157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.769 [2024-11-17 14:36:39.741164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.769 [2024-11-17 14:36:39.753466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.769 [2024-11-17 14:36:39.753803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.769 [2024-11-17 14:36:39.753821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.769 [2024-11-17 14:36:39.753832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.769 [2024-11-17 14:36:39.754004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.769 [2024-11-17 14:36:39.754180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.769 [2024-11-17 14:36:39.754190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.769 [2024-11-17 14:36:39.754196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.769 [2024-11-17 14:36:39.754203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.769 [2024-11-17 14:36:39.766338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.769 [2024-11-17 14:36:39.766738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.769 [2024-11-17 14:36:39.766795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.769 [2024-11-17 14:36:39.766818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.769 [2024-11-17 14:36:39.767373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.769 [2024-11-17 14:36:39.767539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.769 [2024-11-17 14:36:39.767548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.769 [2024-11-17 14:36:39.767555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.769 [2024-11-17 14:36:39.767561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.769 [2024-11-17 14:36:39.779152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.769 [2024-11-17 14:36:39.779569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.769 [2024-11-17 14:36:39.779605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.769 [2024-11-17 14:36:39.779631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.769 [2024-11-17 14:36:39.780210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.769 [2024-11-17 14:36:39.780505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.769 [2024-11-17 14:36:39.780514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.769 [2024-11-17 14:36:39.780521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.769 [2024-11-17 14:36:39.780527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.769 [2024-11-17 14:36:39.792006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.769 [2024-11-17 14:36:39.792432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.769 [2024-11-17 14:36:39.792479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.769 [2024-11-17 14:36:39.792502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.769 [2024-11-17 14:36:39.793082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.769 [2024-11-17 14:36:39.793378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.769 [2024-11-17 14:36:39.793387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.769 [2024-11-17 14:36:39.793394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.769 [2024-11-17 14:36:39.793400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.769 [2024-11-17 14:36:39.804927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.769 [2024-11-17 14:36:39.805373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.769 [2024-11-17 14:36:39.805419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.769 [2024-11-17 14:36:39.805443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.769 [2024-11-17 14:36:39.806023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.769 [2024-11-17 14:36:39.806665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.769 [2024-11-17 14:36:39.806674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.769 [2024-11-17 14:36:39.806680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.769 [2024-11-17 14:36:39.806687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.769 [2024-11-17 14:36:39.817730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.769 [2024-11-17 14:36:39.818148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.769 [2024-11-17 14:36:39.818195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.769 [2024-11-17 14:36:39.818220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.769 [2024-11-17 14:36:39.818753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.769 [2024-11-17 14:36:39.819144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.769 [2024-11-17 14:36:39.819163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.769 [2024-11-17 14:36:39.819176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.769 [2024-11-17 14:36:39.819190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.769 [2024-11-17 14:36:39.832609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.769 [2024-11-17 14:36:39.833120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.769 [2024-11-17 14:36:39.833171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.769 [2024-11-17 14:36:39.833194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.769 [2024-11-17 14:36:39.833753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.769 [2024-11-17 14:36:39.834010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.769 [2024-11-17 14:36:39.834023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.769 [2024-11-17 14:36:39.834037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.769 [2024-11-17 14:36:39.834047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.769 [2024-11-17 14:36:39.845564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.769 [2024-11-17 14:36:39.845916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.769 [2024-11-17 14:36:39.845934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.769 [2024-11-17 14:36:39.845941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.769 [2024-11-17 14:36:39.846109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.769 [2024-11-17 14:36:39.846277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.770 [2024-11-17 14:36:39.846287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.770 [2024-11-17 14:36:39.846294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.770 [2024-11-17 14:36:39.846300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.770 [2024-11-17 14:36:39.858420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.770 [2024-11-17 14:36:39.858857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.770 [2024-11-17 14:36:39.858902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.770 [2024-11-17 14:36:39.858926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.770 [2024-11-17 14:36:39.859523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.770 [2024-11-17 14:36:39.859705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.770 [2024-11-17 14:36:39.859713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.770 [2024-11-17 14:36:39.859720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.770 [2024-11-17 14:36:39.859725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.770 [2024-11-17 14:36:39.873342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.770 [2024-11-17 14:36:39.873865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.770 [2024-11-17 14:36:39.873916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.770 [2024-11-17 14:36:39.873939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.770 [2024-11-17 14:36:39.874535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.770 [2024-11-17 14:36:39.874790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.770 [2024-11-17 14:36:39.874803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.770 [2024-11-17 14:36:39.874813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.770 [2024-11-17 14:36:39.874823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.770 [2024-11-17 14:36:39.886282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.770 [2024-11-17 14:36:39.886718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.770 [2024-11-17 14:36:39.886764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.770 [2024-11-17 14:36:39.886787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.770 [2024-11-17 14:36:39.887382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.770 [2024-11-17 14:36:39.887905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.770 [2024-11-17 14:36:39.887914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.770 [2024-11-17 14:36:39.887921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.770 [2024-11-17 14:36:39.887927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.770 [2024-11-17 14:36:39.899166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.770 [2024-11-17 14:36:39.899585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.770 [2024-11-17 14:36:39.899602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.770 [2024-11-17 14:36:39.899609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.770 [2024-11-17 14:36:39.899772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.770 [2024-11-17 14:36:39.899937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.770 [2024-11-17 14:36:39.899946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.770 [2024-11-17 14:36:39.899952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.770 [2024-11-17 14:36:39.899958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.770 [2024-11-17 14:36:39.912106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.770 [2024-11-17 14:36:39.912508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.770 [2024-11-17 14:36:39.912525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.770 [2024-11-17 14:36:39.912533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.770 [2024-11-17 14:36:39.912698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.770 [2024-11-17 14:36:39.912860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.770 [2024-11-17 14:36:39.912870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.770 [2024-11-17 14:36:39.912877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.770 [2024-11-17 14:36:39.912883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.770 [2024-11-17 14:36:39.925009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.770 [2024-11-17 14:36:39.925404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.770 [2024-11-17 14:36:39.925421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.770 [2024-11-17 14:36:39.925434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.770 [2024-11-17 14:36:39.925599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.770 [2024-11-17 14:36:39.925763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.770 [2024-11-17 14:36:39.925773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.770 [2024-11-17 14:36:39.925780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.770 [2024-11-17 14:36:39.925786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.770 [2024-11-17 14:36:39.937885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.770 [2024-11-17 14:36:39.938300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.770 [2024-11-17 14:36:39.938318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.770 [2024-11-17 14:36:39.938325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.770 [2024-11-17 14:36:39.938501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.770 [2024-11-17 14:36:39.938667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.770 [2024-11-17 14:36:39.938676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.770 [2024-11-17 14:36:39.938682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.770 [2024-11-17 14:36:39.938689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.770 [2024-11-17 14:36:39.950789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.770 [2024-11-17 14:36:39.951217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.770 [2024-11-17 14:36:39.951262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.770 [2024-11-17 14:36:39.951286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.770 [2024-11-17 14:36:39.951827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.770 [2024-11-17 14:36:39.951991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.770 [2024-11-17 14:36:39.951999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.770 [2024-11-17 14:36:39.952005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.770 [2024-11-17 14:36:39.952011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.770 [2024-11-17 14:36:39.963641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.770 [2024-11-17 14:36:39.964057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.770 [2024-11-17 14:36:39.964102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.770 [2024-11-17 14:36:39.964127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.770 [2024-11-17 14:36:39.964723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.770 [2024-11-17 14:36:39.964988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.770 [2024-11-17 14:36:39.964997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.770 [2024-11-17 14:36:39.965003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.770 [2024-11-17 14:36:39.965010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:50.770 [2024-11-17 14:36:39.976491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.770 [2024-11-17 14:36:39.976829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.771 [2024-11-17 14:36:39.976847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:50.771 [2024-11-17 14:36:39.976854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:50.771 [2024-11-17 14:36:39.977017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:50.771 [2024-11-17 14:36:39.977181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.771 [2024-11-17 14:36:39.977190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.771 [2024-11-17 14:36:39.977196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.771 [2024-11-17 14:36:39.977203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.031 [2024-11-17 14:36:39.989585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.031 [2024-11-17 14:36:39.990023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.031 [2024-11-17 14:36:39.990064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.031 [2024-11-17 14:36:39.990090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.031 [2024-11-17 14:36:39.990683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.031 [2024-11-17 14:36:39.991159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.031 [2024-11-17 14:36:39.991177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.031 [2024-11-17 14:36:39.991191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.031 [2024-11-17 14:36:39.991206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.031 [2024-11-17 14:36:40.004611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.031 [2024-11-17 14:36:40.005105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.031 [2024-11-17 14:36:40.005127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.031 [2024-11-17 14:36:40.005138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.031 [2024-11-17 14:36:40.005401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.031 [2024-11-17 14:36:40.005657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.031 [2024-11-17 14:36:40.005670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.031 [2024-11-17 14:36:40.005684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.031 [2024-11-17 14:36:40.005695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.031 [2024-11-17 14:36:40.018696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.031 [2024-11-17 14:36:40.019207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.031 [2024-11-17 14:36:40.019233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.031 [2024-11-17 14:36:40.019248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.031 [2024-11-17 14:36:40.019493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.031 [2024-11-17 14:36:40.019730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.031 [2024-11-17 14:36:40.019746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.031 [2024-11-17 14:36:40.019759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.031 [2024-11-17 14:36:40.019773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.031 [2024-11-17 14:36:40.032140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.031 [2024-11-17 14:36:40.032551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.031 [2024-11-17 14:36:40.032578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.031 [2024-11-17 14:36:40.032593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.031 [2024-11-17 14:36:40.032792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.031 [2024-11-17 14:36:40.032988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.031 [2024-11-17 14:36:40.033004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.031 [2024-11-17 14:36:40.033016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.031 [2024-11-17 14:36:40.033027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.031 [2024-11-17 14:36:40.045852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.031 [2024-11-17 14:36:40.046277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.031 [2024-11-17 14:36:40.046296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.031 [2024-11-17 14:36:40.046305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.031 [2024-11-17 14:36:40.046493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.031 [2024-11-17 14:36:40.046673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.031 [2024-11-17 14:36:40.046683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.031 [2024-11-17 14:36:40.046690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.031 [2024-11-17 14:36:40.046697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.031 [2024-11-17 14:36:40.058942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.031 [2024-11-17 14:36:40.059381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.031 [2024-11-17 14:36:40.059399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.031 [2024-11-17 14:36:40.059408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.031 [2024-11-17 14:36:40.059587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.031 [2024-11-17 14:36:40.059767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.031 [2024-11-17 14:36:40.059776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.031 [2024-11-17 14:36:40.059783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.031 [2024-11-17 14:36:40.059790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.031 [2024-11-17 14:36:40.072148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.031 [2024-11-17 14:36:40.072564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.031 [2024-11-17 14:36:40.072583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.031 [2024-11-17 14:36:40.072591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.031 [2024-11-17 14:36:40.072770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.031 [2024-11-17 14:36:40.072948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.031 [2024-11-17 14:36:40.072958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.031 [2024-11-17 14:36:40.072965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.031 [2024-11-17 14:36:40.072972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.031 [2024-11-17 14:36:40.085345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.031 [2024-11-17 14:36:40.085773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.031 [2024-11-17 14:36:40.085791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.031 [2024-11-17 14:36:40.085799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.031 [2024-11-17 14:36:40.085977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.031 [2024-11-17 14:36:40.086156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.031 [2024-11-17 14:36:40.086164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.031 [2024-11-17 14:36:40.086172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.031 [2024-11-17 14:36:40.086178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.032 [2024-11-17 14:36:40.099506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.032 [2024-11-17 14:36:40.099992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.032 [2024-11-17 14:36:40.100012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.032 [2024-11-17 14:36:40.100025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.032 [2024-11-17 14:36:40.100223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.032 [2024-11-17 14:36:40.100429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.032 [2024-11-17 14:36:40.100441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.032 [2024-11-17 14:36:40.100450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.032 [2024-11-17 14:36:40.100458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.032 [2024-11-17 14:36:40.112579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.032 [2024-11-17 14:36:40.113011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.032 [2024-11-17 14:36:40.113028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.032 [2024-11-17 14:36:40.113036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.032 [2024-11-17 14:36:40.113199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.032 [2024-11-17 14:36:40.113371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.032 [2024-11-17 14:36:40.113380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.032 [2024-11-17 14:36:40.113387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.032 [2024-11-17 14:36:40.113395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.032 [2024-11-17 14:36:40.125638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.032 [2024-11-17 14:36:40.126066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.032 [2024-11-17 14:36:40.126083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.032 [2024-11-17 14:36:40.126091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.032 [2024-11-17 14:36:40.126264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.032 [2024-11-17 14:36:40.126444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.032 [2024-11-17 14:36:40.126454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.032 [2024-11-17 14:36:40.126461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.032 [2024-11-17 14:36:40.126468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.032 [2024-11-17 14:36:40.138611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.032 [2024-11-17 14:36:40.139049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.032 [2024-11-17 14:36:40.139094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.032 [2024-11-17 14:36:40.139118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.032 [2024-11-17 14:36:40.139529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.032 [2024-11-17 14:36:40.139707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.032 [2024-11-17 14:36:40.139716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.032 [2024-11-17 14:36:40.139723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.032 [2024-11-17 14:36:40.139729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.032 [2024-11-17 14:36:40.151597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.032 [2024-11-17 14:36:40.151949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.032 [2024-11-17 14:36:40.151967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.032 [2024-11-17 14:36:40.151975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.032 [2024-11-17 14:36:40.152148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.032 [2024-11-17 14:36:40.152322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.032 [2024-11-17 14:36:40.152331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.032 [2024-11-17 14:36:40.152338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.032 [2024-11-17 14:36:40.152344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.032 [2024-11-17 14:36:40.164591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.032 [2024-11-17 14:36:40.165028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.032 [2024-11-17 14:36:40.165072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.032 [2024-11-17 14:36:40.165095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.032 [2024-11-17 14:36:40.165613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.032 [2024-11-17 14:36:40.165788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.032 [2024-11-17 14:36:40.165798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.032 [2024-11-17 14:36:40.165804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.032 [2024-11-17 14:36:40.165812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
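The "(9)" in the recurring flush error is errno 9, EBADF: by the time nvme_tcp_qpair_process_completions tries to flush the qpair, the failed connect has already torn the socket down, so any I/O on its descriptor fails. A tiny illustration of that errno, unrelated to SPDK internals:

    /* Sketch: I/O on an already-closed descriptor yields EBADF (errno 9),
     * the "(9)" shown in the "Failed to flush tqpair" lines above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = dup(1);                 /* any valid descriptor... */
        close(fd);                       /* ...made invalid, like the dead qpair socket */

        char byte = 0;
        if (write(fd, &byte, 1) < 0) {
            printf("flush failed (%d): %s\n", errno, strerror(errno));
        }
        return 0;
    }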
00:26:51.032 [2024-11-17 14:36:40.177654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.032 [2024-11-17 14:36:40.178004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.032 [2024-11-17 14:36:40.178021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.032 [2024-11-17 14:36:40.178029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.032 [2024-11-17 14:36:40.178201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.032 [2024-11-17 14:36:40.178380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.032 [2024-11-17 14:36:40.178390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.032 [2024-11-17 14:36:40.178401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.032 [2024-11-17 14:36:40.178408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.032 [2024-11-17 14:36:40.190732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.032 [2024-11-17 14:36:40.191158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.032 [2024-11-17 14:36:40.191175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.032 [2024-11-17 14:36:40.191183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.032 [2024-11-17 14:36:40.191363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.032 [2024-11-17 14:36:40.191537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.032 [2024-11-17 14:36:40.191546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.032 [2024-11-17 14:36:40.191553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.032 [2024-11-17 14:36:40.191560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.032 [2024-11-17 14:36:40.203793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.032 [2024-11-17 14:36:40.204156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.032 [2024-11-17 14:36:40.204200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.032 [2024-11-17 14:36:40.204223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.032 [2024-11-17 14:36:40.204710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.032 [2024-11-17 14:36:40.204876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.032 [2024-11-17 14:36:40.204885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.032 [2024-11-17 14:36:40.204892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.032 [2024-11-17 14:36:40.204899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.032 [2024-11-17 14:36:40.216817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.032 [2024-11-17 14:36:40.217241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.032 [2024-11-17 14:36:40.217258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.032 [2024-11-17 14:36:40.217266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.032 [2024-11-17 14:36:40.217445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.032 [2024-11-17 14:36:40.217619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.032 [2024-11-17 14:36:40.217629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.033 [2024-11-17 14:36:40.217636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.033 [2024-11-17 14:36:40.217643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.033 [2024-11-17 14:36:40.229833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.033 [2024-11-17 14:36:40.230265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.033 [2024-11-17 14:36:40.230310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.033 [2024-11-17 14:36:40.230333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.033 [2024-11-17 14:36:40.230786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.033 [2024-11-17 14:36:40.230960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.033 [2024-11-17 14:36:40.230970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.033 [2024-11-17 14:36:40.230976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.033 [2024-11-17 14:36:40.230984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.033 [2024-11-17 14:36:40.242925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.033 [2024-11-17 14:36:40.243302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.033 [2024-11-17 14:36:40.243369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.033 [2024-11-17 14:36:40.243396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.033 [2024-11-17 14:36:40.243899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.033 [2024-11-17 14:36:40.244074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.033 [2024-11-17 14:36:40.244083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.033 [2024-11-17 14:36:40.244091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.033 [2024-11-17 14:36:40.244098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.294 [2024-11-17 14:36:40.256127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.294 [2024-11-17 14:36:40.256531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.294 [2024-11-17 14:36:40.256579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.294 [2024-11-17 14:36:40.256604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.294 [2024-11-17 14:36:40.257185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.294 [2024-11-17 14:36:40.257763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.294 [2024-11-17 14:36:40.257773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.294 [2024-11-17 14:36:40.257779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.294 [2024-11-17 14:36:40.257786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.294 [2024-11-17 14:36:40.269292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.294 [2024-11-17 14:36:40.269662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.294 [2024-11-17 14:36:40.269680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.294 [2024-11-17 14:36:40.269692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.294 [2024-11-17 14:36:40.269871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.294 [2024-11-17 14:36:40.270050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.294 [2024-11-17 14:36:40.270060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.294 [2024-11-17 14:36:40.270066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.294 [2024-11-17 14:36:40.270074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.294 [2024-11-17 14:36:40.282470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.294 [2024-11-17 14:36:40.282902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.294 [2024-11-17 14:36:40.282919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.294 [2024-11-17 14:36:40.282928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.294 [2024-11-17 14:36:40.283106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.294 [2024-11-17 14:36:40.283285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.294 [2024-11-17 14:36:40.283294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.294 [2024-11-17 14:36:40.283301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.294 [2024-11-17 14:36:40.283307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.294 [2024-11-17 14:36:40.295566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.294 [2024-11-17 14:36:40.295899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.294 [2024-11-17 14:36:40.295916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.294 [2024-11-17 14:36:40.295924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.294 [2024-11-17 14:36:40.296096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.294 [2024-11-17 14:36:40.296269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.294 [2024-11-17 14:36:40.296278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.294 [2024-11-17 14:36:40.296285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.294 [2024-11-17 14:36:40.296292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.294 [2024-11-17 14:36:40.308630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.294 [2024-11-17 14:36:40.308987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.294 [2024-11-17 14:36:40.309003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.294 [2024-11-17 14:36:40.309012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.294 [2024-11-17 14:36:40.309185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.294 [2024-11-17 14:36:40.309369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.294 [2024-11-17 14:36:40.309380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.294 [2024-11-17 14:36:40.309386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.294 [2024-11-17 14:36:40.309394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.294 [2024-11-17 14:36:40.321735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.294 [2024-11-17 14:36:40.322046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.294 [2024-11-17 14:36:40.322063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.294 [2024-11-17 14:36:40.322071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.294 [2024-11-17 14:36:40.322250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.294 [2024-11-17 14:36:40.322434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.294 [2024-11-17 14:36:40.322445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.294 [2024-11-17 14:36:40.322452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.294 [2024-11-17 14:36:40.322459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.294 [2024-11-17 14:36:40.334819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.294 [2024-11-17 14:36:40.335246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.294 [2024-11-17 14:36:40.335264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.294 [2024-11-17 14:36:40.335272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.294 [2024-11-17 14:36:40.335453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.294 [2024-11-17 14:36:40.335627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.294 [2024-11-17 14:36:40.335636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.294 [2024-11-17 14:36:40.335643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.294 [2024-11-17 14:36:40.335650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.294 [2024-11-17 14:36:40.347824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.294 [2024-11-17 14:36:40.348228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.294 [2024-11-17 14:36:40.348246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.294 [2024-11-17 14:36:40.348254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.294 [2024-11-17 14:36:40.348433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.294 [2024-11-17 14:36:40.348606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.294 [2024-11-17 14:36:40.348616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.294 [2024-11-17 14:36:40.348626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.294 [2024-11-17 14:36:40.348633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.294 [2024-11-17 14:36:40.360794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.294 [2024-11-17 14:36:40.361245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.294 [2024-11-17 14:36:40.361297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.294 [2024-11-17 14:36:40.361321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.294 [2024-11-17 14:36:40.361884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.294 [2024-11-17 14:36:40.362059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.294 [2024-11-17 14:36:40.362069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.294 [2024-11-17 14:36:40.362075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.294 [2024-11-17 14:36:40.362082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.294 [2024-11-17 14:36:40.373788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.294 [2024-11-17 14:36:40.374200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.295 [2024-11-17 14:36:40.374218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.295 [2024-11-17 14:36:40.374226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.295 [2024-11-17 14:36:40.374407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.295 [2024-11-17 14:36:40.374582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.295 [2024-11-17 14:36:40.374592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.295 [2024-11-17 14:36:40.374598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.295 [2024-11-17 14:36:40.374605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.295 [2024-11-17 14:36:40.386798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.295 [2024-11-17 14:36:40.387159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.295 [2024-11-17 14:36:40.387176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.295 [2024-11-17 14:36:40.387184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.295 [2024-11-17 14:36:40.387363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.295 [2024-11-17 14:36:40.387537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.295 [2024-11-17 14:36:40.387547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.295 [2024-11-17 14:36:40.387554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.295 [2024-11-17 14:36:40.387561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.295 [2024-11-17 14:36:40.399790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.295 [2024-11-17 14:36:40.400224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.295 [2024-11-17 14:36:40.400268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.295 [2024-11-17 14:36:40.400293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.295 [2024-11-17 14:36:40.400816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.295 [2024-11-17 14:36:40.400991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.295 [2024-11-17 14:36:40.401001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.295 [2024-11-17 14:36:40.401008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.295 [2024-11-17 14:36:40.401014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.295 [2024-11-17 14:36:40.412858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.295 [2024-11-17 14:36:40.413312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.295 [2024-11-17 14:36:40.413370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.295 [2024-11-17 14:36:40.413396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.295 [2024-11-17 14:36:40.413976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.295 [2024-11-17 14:36:40.414380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.295 [2024-11-17 14:36:40.414391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.295 [2024-11-17 14:36:40.414398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.295 [2024-11-17 14:36:40.414404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.295 [2024-11-17 14:36:40.425932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.295 [2024-11-17 14:36:40.426347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.295 [2024-11-17 14:36:40.426406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.295 [2024-11-17 14:36:40.426430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.295 [2024-11-17 14:36:40.426941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.295 [2024-11-17 14:36:40.427125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.295 [2024-11-17 14:36:40.427135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.295 [2024-11-17 14:36:40.427141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.295 [2024-11-17 14:36:40.427147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.295 [2024-11-17 14:36:40.438868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.295 [2024-11-17 14:36:40.439212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.295 [2024-11-17 14:36:40.439229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.295 [2024-11-17 14:36:40.439241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.295 [2024-11-17 14:36:40.439409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.295 [2024-11-17 14:36:40.439596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.295 [2024-11-17 14:36:40.439606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.295 [2024-11-17 14:36:40.439613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.295 [2024-11-17 14:36:40.439619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.295 [2024-11-17 14:36:40.451843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.295 [2024-11-17 14:36:40.452262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.295 [2024-11-17 14:36:40.452279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.295 [2024-11-17 14:36:40.452286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.295 [2024-11-17 14:36:40.452467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.295 [2024-11-17 14:36:40.452640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.295 [2024-11-17 14:36:40.452649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.295 [2024-11-17 14:36:40.452656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.295 [2024-11-17 14:36:40.452662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.295 [2024-11-17 14:36:40.464876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.295 [2024-11-17 14:36:40.465278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.295 [2024-11-17 14:36:40.465335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.295 [2024-11-17 14:36:40.465371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.295 [2024-11-17 14:36:40.465952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.295 [2024-11-17 14:36:40.466173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.295 [2024-11-17 14:36:40.466182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.295 [2024-11-17 14:36:40.466189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.295 [2024-11-17 14:36:40.466195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.295 [2024-11-17 14:36:40.477931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.295 [2024-11-17 14:36:40.478363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.295 [2024-11-17 14:36:40.478382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.295 [2024-11-17 14:36:40.478391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.295 [2024-11-17 14:36:40.478569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.295 [2024-11-17 14:36:40.478750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.295 [2024-11-17 14:36:40.478760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.295 [2024-11-17 14:36:40.478768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.295 [2024-11-17 14:36:40.478774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.295 [2024-11-17 14:36:40.490971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.295 [2024-11-17 14:36:40.491387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.295 [2024-11-17 14:36:40.491406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.295 [2024-11-17 14:36:40.491414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.295 [2024-11-17 14:36:40.491593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.295 [2024-11-17 14:36:40.491772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.295 [2024-11-17 14:36:40.491782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.295 [2024-11-17 14:36:40.491788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.295 [2024-11-17 14:36:40.491795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.295 [2024-11-17 14:36:40.504159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.296 [2024-11-17 14:36:40.504583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.296 [2024-11-17 14:36:40.504602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.296 [2024-11-17 14:36:40.504611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.296 [2024-11-17 14:36:40.504791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.296 [2024-11-17 14:36:40.504970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.296 [2024-11-17 14:36:40.504980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.296 [2024-11-17 14:36:40.504988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.296 [2024-11-17 14:36:40.504996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.557 [2024-11-17 14:36:40.517348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.557 [2024-11-17 14:36:40.517788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.557 [2024-11-17 14:36:40.517805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.557 [2024-11-17 14:36:40.517813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.557 [2024-11-17 14:36:40.517992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.557 [2024-11-17 14:36:40.518189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.557 [2024-11-17 14:36:40.518199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.557 [2024-11-17 14:36:40.518209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.557 [2024-11-17 14:36:40.518217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.557 5947.40 IOPS, 23.23 MiB/s [2024-11-17T13:36:40.782Z] [2024-11-17 14:36:40.530427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.557 [2024-11-17 14:36:40.530884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.557 [2024-11-17 14:36:40.530902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.557 [2024-11-17 14:36:40.530910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.557 [2024-11-17 14:36:40.531089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.557 [2024-11-17 14:36:40.531267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.557 [2024-11-17 14:36:40.531277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.557 [2024-11-17 14:36:40.531285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.557 [2024-11-17 14:36:40.531292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
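The bandwidth sample interleaved above is internally consistent with a 4 KiB I/O size: 5947.40 IOPS x 4096 bytes is about 23.23 MiB/s. A quick check in plain C; the IOPS and MiB/s values are copied from the log, while the 4 KiB block size is inferred from the arithmetic rather than stated anywhere in the output:

    /* Sanity-check the interleaved performance sample. */
    #include <stdio.h>

    int main(void)
    {
        double iops = 5947.40;                    /* from the log line above */
        double io_bytes = 4096.0;                 /* assumed 4 KiB I/O size */
        double mib_per_s = iops * io_bytes / (1024.0 * 1024.0);
        printf("%.2f IOPS x 4 KiB = %.2f MiB/s\n", iops, mib_per_s);  /* 23.23 */
        return 0;
    }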
00:26:51.557 [2024-11-17 14:36:40.543530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.557 [2024-11-17 14:36:40.543953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.557 [2024-11-17 14:36:40.543971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.557 [2024-11-17 14:36:40.543979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.557 [2024-11-17 14:36:40.544158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.557 [2024-11-17 14:36:40.544336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.557 [2024-11-17 14:36:40.544346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.557 [2024-11-17 14:36:40.544360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.557 [2024-11-17 14:36:40.544367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.557 [2024-11-17 14:36:40.556725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.557 [2024-11-17 14:36:40.557132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.557 [2024-11-17 14:36:40.557149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.557 [2024-11-17 14:36:40.557158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.557 [2024-11-17 14:36:40.557336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.557 [2024-11-17 14:36:40.557520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.557 [2024-11-17 14:36:40.557530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.557 [2024-11-17 14:36:40.557537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.557 [2024-11-17 14:36:40.557544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.557 [2024-11-17 14:36:40.569932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.557 [2024-11-17 14:36:40.570364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.557 [2024-11-17 14:36:40.570383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.557 [2024-11-17 14:36:40.570391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.557 [2024-11-17 14:36:40.570570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.557 [2024-11-17 14:36:40.570748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.557 [2024-11-17 14:36:40.570758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.558 [2024-11-17 14:36:40.570765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.558 [2024-11-17 14:36:40.570772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.558 [2024-11-17 14:36:40.582991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.558 [2024-11-17 14:36:40.583420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.558 [2024-11-17 14:36:40.583439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.558 [2024-11-17 14:36:40.583447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.558 [2024-11-17 14:36:40.583626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.558 [2024-11-17 14:36:40.583806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.558 [2024-11-17 14:36:40.583816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.558 [2024-11-17 14:36:40.583822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.558 [2024-11-17 14:36:40.583829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.558 [2024-11-17 14:36:40.596194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.558 [2024-11-17 14:36:40.596561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.558 [2024-11-17 14:36:40.596579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.558 [2024-11-17 14:36:40.596587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.558 [2024-11-17 14:36:40.596766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.558 [2024-11-17 14:36:40.596944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.558 [2024-11-17 14:36:40.596954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.558 [2024-11-17 14:36:40.596961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.558 [2024-11-17 14:36:40.596968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.558 [2024-11-17 14:36:40.609309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.558 [2024-11-17 14:36:40.609722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.558 [2024-11-17 14:36:40.609740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.558 [2024-11-17 14:36:40.609752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.558 [2024-11-17 14:36:40.609930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.558 [2024-11-17 14:36:40.610108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.558 [2024-11-17 14:36:40.610118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.558 [2024-11-17 14:36:40.610125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.558 [2024-11-17 14:36:40.610131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.558 [2024-11-17 14:36:40.622531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.558 [2024-11-17 14:36:40.622950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.558 [2024-11-17 14:36:40.622969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.558 [2024-11-17 14:36:40.622977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.558 [2024-11-17 14:36:40.623161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.558 [2024-11-17 14:36:40.623346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.558 [2024-11-17 14:36:40.623363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.558 [2024-11-17 14:36:40.623371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.558 [2024-11-17 14:36:40.623380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.558 [2024-11-17 14:36:40.635590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.558 [2024-11-17 14:36:40.636026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.558 [2024-11-17 14:36:40.636044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.558 [2024-11-17 14:36:40.636052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.558 [2024-11-17 14:36:40.636230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.558 [2024-11-17 14:36:40.636413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.558 [2024-11-17 14:36:40.636423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.558 [2024-11-17 14:36:40.636430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.558 [2024-11-17 14:36:40.636436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.558 [2024-11-17 14:36:40.648665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.558 [2024-11-17 14:36:40.649117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.558 [2024-11-17 14:36:40.649165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.558 [2024-11-17 14:36:40.649190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.558 [2024-11-17 14:36:40.649708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.558 [2024-11-17 14:36:40.649892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.558 [2024-11-17 14:36:40.649901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.558 [2024-11-17 14:36:40.649907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.558 [2024-11-17 14:36:40.649914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.558 [2024-11-17 14:36:40.661640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.558 [2024-11-17 14:36:40.662007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.558 [2024-11-17 14:36:40.662052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.558 [2024-11-17 14:36:40.662077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.558 [2024-11-17 14:36:40.662547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.558 [2024-11-17 14:36:40.662722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.558 [2024-11-17 14:36:40.662732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.558 [2024-11-17 14:36:40.662740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.558 [2024-11-17 14:36:40.662747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.558 [2024-11-17 14:36:40.674559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.558 [2024-11-17 14:36:40.674953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.558 [2024-11-17 14:36:40.675008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.558 [2024-11-17 14:36:40.675032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.558 [2024-11-17 14:36:40.675625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.558 [2024-11-17 14:36:40.675806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.558 [2024-11-17 14:36:40.675815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.558 [2024-11-17 14:36:40.675821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.558 [2024-11-17 14:36:40.675828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.558 [2024-11-17 14:36:40.687411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.558 [2024-11-17 14:36:40.687781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.558 [2024-11-17 14:36:40.687827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.558 [2024-11-17 14:36:40.687852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.558 [2024-11-17 14:36:40.688450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.558 [2024-11-17 14:36:40.688957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.558 [2024-11-17 14:36:40.688966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.558 [2024-11-17 14:36:40.688976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.558 [2024-11-17 14:36:40.688983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.558 [2024-11-17 14:36:40.700322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.558 [2024-11-17 14:36:40.700654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.558 [2024-11-17 14:36:40.700672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.558 [2024-11-17 14:36:40.700680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.558 [2024-11-17 14:36:40.700844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.559 [2024-11-17 14:36:40.701007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.559 [2024-11-17 14:36:40.701016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.559 [2024-11-17 14:36:40.701022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.559 [2024-11-17 14:36:40.701029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.559 [2024-11-17 14:36:40.713127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.559 [2024-11-17 14:36:40.713465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.559 [2024-11-17 14:36:40.713483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.559 [2024-11-17 14:36:40.713490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.559 [2024-11-17 14:36:40.713653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.559 [2024-11-17 14:36:40.713816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.559 [2024-11-17 14:36:40.713826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.559 [2024-11-17 14:36:40.713832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.559 [2024-11-17 14:36:40.713838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.559 [2024-11-17 14:36:40.725952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.559 [2024-11-17 14:36:40.726372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.559 [2024-11-17 14:36:40.726390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.559 [2024-11-17 14:36:40.726397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.559 [2024-11-17 14:36:40.726561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.559 [2024-11-17 14:36:40.726724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.559 [2024-11-17 14:36:40.726734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.559 [2024-11-17 14:36:40.726740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.559 [2024-11-17 14:36:40.726746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.559 [2024-11-17 14:36:40.738977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.559 [2024-11-17 14:36:40.739428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.559 [2024-11-17 14:36:40.739475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.559 [2024-11-17 14:36:40.739499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.559 [2024-11-17 14:36:40.739950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.559 [2024-11-17 14:36:40.740124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.559 [2024-11-17 14:36:40.740134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.559 [2024-11-17 14:36:40.740140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.559 [2024-11-17 14:36:40.740147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.559 [2024-11-17 14:36:40.751881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.559 [2024-11-17 14:36:40.752280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.559 [2024-11-17 14:36:40.752297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.559 [2024-11-17 14:36:40.752305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.559 [2024-11-17 14:36:40.752473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.559 [2024-11-17 14:36:40.752638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.559 [2024-11-17 14:36:40.752647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.559 [2024-11-17 14:36:40.752654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.559 [2024-11-17 14:36:40.752661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.559 [2024-11-17 14:36:40.764852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.559 [2024-11-17 14:36:40.765261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.559 [2024-11-17 14:36:40.765278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.559 [2024-11-17 14:36:40.765287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.559 [2024-11-17 14:36:40.765472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.559 [2024-11-17 14:36:40.765657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.559 [2024-11-17 14:36:40.765668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.559 [2024-11-17 14:36:40.765675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.559 [2024-11-17 14:36:40.765682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.820 [2024-11-17 14:36:40.778005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.820 [2024-11-17 14:36:40.778342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.820 [2024-11-17 14:36:40.778365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.820 [2024-11-17 14:36:40.778377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.820 [2024-11-17 14:36:40.778560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.820 [2024-11-17 14:36:40.778724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.820 [2024-11-17 14:36:40.778734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.820 [2024-11-17 14:36:40.778740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.820 [2024-11-17 14:36:40.778746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.820 [2024-11-17 14:36:40.790818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.820 [2024-11-17 14:36:40.791141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.820 [2024-11-17 14:36:40.791158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.820 [2024-11-17 14:36:40.791166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.820 [2024-11-17 14:36:40.791329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.820 [2024-11-17 14:36:40.791497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.820 [2024-11-17 14:36:40.791506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.820 [2024-11-17 14:36:40.791513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.820 [2024-11-17 14:36:40.791520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.820 [2024-11-17 14:36:40.803634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.820 [2024-11-17 14:36:40.804028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.820 [2024-11-17 14:36:40.804044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.820 [2024-11-17 14:36:40.804053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.820 [2024-11-17 14:36:40.804216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.820 [2024-11-17 14:36:40.804384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.820 [2024-11-17 14:36:40.804394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.820 [2024-11-17 14:36:40.804401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.820 [2024-11-17 14:36:40.804407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.820 [2024-11-17 14:36:40.816585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.820 [2024-11-17 14:36:40.816943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.820 [2024-11-17 14:36:40.816961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.820 [2024-11-17 14:36:40.816969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.820 [2024-11-17 14:36:40.817147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.820 [2024-11-17 14:36:40.817330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.820 [2024-11-17 14:36:40.817340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.820 [2024-11-17 14:36:40.817348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.820 [2024-11-17 14:36:40.817363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.820 [2024-11-17 14:36:40.829501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.820 [2024-11-17 14:36:40.829917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.820 [2024-11-17 14:36:40.829934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.820 [2024-11-17 14:36:40.829942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.820 [2024-11-17 14:36:40.830105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.820 [2024-11-17 14:36:40.830269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.820 [2024-11-17 14:36:40.830278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.820 [2024-11-17 14:36:40.830284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.820 [2024-11-17 14:36:40.830290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.821 [2024-11-17 14:36:40.842378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.821 [2024-11-17 14:36:40.842738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.821 [2024-11-17 14:36:40.842755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.821 [2024-11-17 14:36:40.842763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.821 [2024-11-17 14:36:40.842926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.821 [2024-11-17 14:36:40.843090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.821 [2024-11-17 14:36:40.843099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.821 [2024-11-17 14:36:40.843106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.821 [2024-11-17 14:36:40.843112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.821 [2024-11-17 14:36:40.855210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.821 [2024-11-17 14:36:40.855631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.821 [2024-11-17 14:36:40.855648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.821 [2024-11-17 14:36:40.855655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.821 [2024-11-17 14:36:40.855818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.821 [2024-11-17 14:36:40.855982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.821 [2024-11-17 14:36:40.855993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.821 [2024-11-17 14:36:40.856003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.821 [2024-11-17 14:36:40.856011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.821 [2024-11-17 14:36:40.868209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.821 [2024-11-17 14:36:40.868557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.821 [2024-11-17 14:36:40.868574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.821 [2024-11-17 14:36:40.868582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.821 [2024-11-17 14:36:40.868745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.821 [2024-11-17 14:36:40.868908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.821 [2024-11-17 14:36:40.868917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.821 [2024-11-17 14:36:40.868923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.821 [2024-11-17 14:36:40.868932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.821 [2024-11-17 14:36:40.881131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.821 [2024-11-17 14:36:40.881555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.821 [2024-11-17 14:36:40.881601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.821 [2024-11-17 14:36:40.881626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.821 [2024-11-17 14:36:40.882073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.821 [2024-11-17 14:36:40.882239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.821 [2024-11-17 14:36:40.882248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.821 [2024-11-17 14:36:40.882254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.821 [2024-11-17 14:36:40.882260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.821 [2024-11-17 14:36:40.894041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.821 [2024-11-17 14:36:40.894459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.821 [2024-11-17 14:36:40.894477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.821 [2024-11-17 14:36:40.894484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.821 [2024-11-17 14:36:40.894647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.821 [2024-11-17 14:36:40.894811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.821 [2024-11-17 14:36:40.894820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.821 [2024-11-17 14:36:40.894826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.821 [2024-11-17 14:36:40.894832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.821 [2024-11-17 14:36:40.906940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.821 [2024-11-17 14:36:40.907361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.821 [2024-11-17 14:36:40.907411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.821 [2024-11-17 14:36:40.907435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.821 [2024-11-17 14:36:40.908014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.821 [2024-11-17 14:36:40.908228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.821 [2024-11-17 14:36:40.908237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.821 [2024-11-17 14:36:40.908244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.821 [2024-11-17 14:36:40.908250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.821 [2024-11-17 14:36:40.919739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.821 [2024-11-17 14:36:40.920173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.821 [2024-11-17 14:36:40.920217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.821 [2024-11-17 14:36:40.920240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.821 [2024-11-17 14:36:40.920836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.821 [2024-11-17 14:36:40.921408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.821 [2024-11-17 14:36:40.921417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.821 [2024-11-17 14:36:40.921424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.821 [2024-11-17 14:36:40.921431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.821 [2024-11-17 14:36:40.932682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.821 [2024-11-17 14:36:40.933085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.821 [2024-11-17 14:36:40.933102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.821 [2024-11-17 14:36:40.933110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.821 [2024-11-17 14:36:40.933274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.821 [2024-11-17 14:36:40.933463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.821 [2024-11-17 14:36:40.933473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.821 [2024-11-17 14:36:40.933480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.821 [2024-11-17 14:36:40.933487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.821 [2024-11-17 14:36:40.945490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.821 [2024-11-17 14:36:40.945898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.821 [2024-11-17 14:36:40.945914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.821 [2024-11-17 14:36:40.945925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.821 [2024-11-17 14:36:40.946090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.821 [2024-11-17 14:36:40.946253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.821 [2024-11-17 14:36:40.946262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.821 [2024-11-17 14:36:40.946268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.821 [2024-11-17 14:36:40.946274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.821 [2024-11-17 14:36:40.958382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.821 [2024-11-17 14:36:40.958735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.821 [2024-11-17 14:36:40.958780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.821 [2024-11-17 14:36:40.958803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.821 [2024-11-17 14:36:40.959225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.821 [2024-11-17 14:36:40.959395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.821 [2024-11-17 14:36:40.959421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.822 [2024-11-17 14:36:40.959427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.822 [2024-11-17 14:36:40.959434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.822 [2024-11-17 14:36:40.971305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.822 [2024-11-17 14:36:40.971726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.822 [2024-11-17 14:36:40.971743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.822 [2024-11-17 14:36:40.971751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.822 [2024-11-17 14:36:40.971913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.822 [2024-11-17 14:36:40.972076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.822 [2024-11-17 14:36:40.972086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.822 [2024-11-17 14:36:40.972092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.822 [2024-11-17 14:36:40.972098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.822 [2024-11-17 14:36:40.984180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.822 [2024-11-17 14:36:40.984597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.822 [2024-11-17 14:36:40.984614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.822 [2024-11-17 14:36:40.984622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.822 [2024-11-17 14:36:40.984794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.822 [2024-11-17 14:36:40.984970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.822 [2024-11-17 14:36:40.984980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.822 [2024-11-17 14:36:40.984987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.822 [2024-11-17 14:36:40.984993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.822 [2024-11-17 14:36:40.996966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.822 [2024-11-17 14:36:40.997390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.822 [2024-11-17 14:36:40.997435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.822 [2024-11-17 14:36:40.997459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.822 [2024-11-17 14:36:40.997872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.822 [2024-11-17 14:36:40.998037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.822 [2024-11-17 14:36:40.998046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.822 [2024-11-17 14:36:40.998053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.822 [2024-11-17 14:36:40.998059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.822 [2024-11-17 14:36:41.009845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.822 [2024-11-17 14:36:41.010275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.822 [2024-11-17 14:36:41.010319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.822 [2024-11-17 14:36:41.010343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.822 [2024-11-17 14:36:41.010894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.822 [2024-11-17 14:36:41.011283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.822 [2024-11-17 14:36:41.011301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.822 [2024-11-17 14:36:41.011316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.822 [2024-11-17 14:36:41.011329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:51.822 [2024-11-17 14:36:41.024872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.822 [2024-11-17 14:36:41.025388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.822 [2024-11-17 14:36:41.025411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.822 [2024-11-17 14:36:41.025423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.822 [2024-11-17 14:36:41.025678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.822 [2024-11-17 14:36:41.025936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.822 [2024-11-17 14:36:41.025949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.822 [2024-11-17 14:36:41.025964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.822 [2024-11-17 14:36:41.025975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.822 [2024-11-17 14:36:41.037949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.822 [2024-11-17 14:36:41.038364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.822 [2024-11-17 14:36:41.038382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:51.822 [2024-11-17 14:36:41.038390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:51.822 [2024-11-17 14:36:41.038568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:51.822 [2024-11-17 14:36:41.038746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.822 [2024-11-17 14:36:41.038755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.822 [2024-11-17 14:36:41.038762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.822 [2024-11-17 14:36:41.038769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.083 [2024-11-17 14:36:41.050959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.083 [2024-11-17 14:36:41.051317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.083 [2024-11-17 14:36:41.051333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.083 [2024-11-17 14:36:41.051341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.083 [2024-11-17 14:36:41.051512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.083 [2024-11-17 14:36:41.051676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.083 [2024-11-17 14:36:41.051686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.083 [2024-11-17 14:36:41.051692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.083 [2024-11-17 14:36:41.051698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.083 [2024-11-17 14:36:41.063897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.083 [2024-11-17 14:36:41.064311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.083 [2024-11-17 14:36:41.064328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.083 [2024-11-17 14:36:41.064335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.083 [2024-11-17 14:36:41.064504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.083 [2024-11-17 14:36:41.064668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.083 [2024-11-17 14:36:41.064678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.083 [2024-11-17 14:36:41.064684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.083 [2024-11-17 14:36:41.064690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.083 [2024-11-17 14:36:41.076797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.083 [2024-11-17 14:36:41.077224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.083 [2024-11-17 14:36:41.077268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.083 [2024-11-17 14:36:41.077292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.083 [2024-11-17 14:36:41.077796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.083 [2024-11-17 14:36:41.077961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.083 [2024-11-17 14:36:41.077970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.083 [2024-11-17 14:36:41.077976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.083 [2024-11-17 14:36:41.077982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.083 [2024-11-17 14:36:41.089706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.083 [2024-11-17 14:36:41.090113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.083 [2024-11-17 14:36:41.090130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.083 [2024-11-17 14:36:41.090137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.083 [2024-11-17 14:36:41.090300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.083 [2024-11-17 14:36:41.090470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.083 [2024-11-17 14:36:41.090480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.083 [2024-11-17 14:36:41.090486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.083 [2024-11-17 14:36:41.090492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.083 [2024-11-17 14:36:41.102581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.083 [2024-11-17 14:36:41.102979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.084 [2024-11-17 14:36:41.102996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.084 [2024-11-17 14:36:41.103003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.084 [2024-11-17 14:36:41.103167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.084 [2024-11-17 14:36:41.103331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.084 [2024-11-17 14:36:41.103340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.084 [2024-11-17 14:36:41.103346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.084 [2024-11-17 14:36:41.103358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.084 [2024-11-17 14:36:41.115464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.084 [2024-11-17 14:36:41.115896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.084 [2024-11-17 14:36:41.115941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.084 [2024-11-17 14:36:41.115972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.084 [2024-11-17 14:36:41.116570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.084 [2024-11-17 14:36:41.117074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.084 [2024-11-17 14:36:41.117083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.084 [2024-11-17 14:36:41.117090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.084 [2024-11-17 14:36:41.117097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.084 [2024-11-17 14:36:41.128381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.084 [2024-11-17 14:36:41.128793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.084 [2024-11-17 14:36:41.128810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.084 [2024-11-17 14:36:41.128817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.084 [2024-11-17 14:36:41.128981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.084 [2024-11-17 14:36:41.129144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.084 [2024-11-17 14:36:41.129152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.084 [2024-11-17 14:36:41.129159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.084 [2024-11-17 14:36:41.129165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.084 [2024-11-17 14:36:41.141261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.084 [2024-11-17 14:36:41.141608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.084 [2024-11-17 14:36:41.141625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.084 [2024-11-17 14:36:41.141633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.084 [2024-11-17 14:36:41.141795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.084 [2024-11-17 14:36:41.141959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.084 [2024-11-17 14:36:41.141968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.084 [2024-11-17 14:36:41.141974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.084 [2024-11-17 14:36:41.141981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.084 [2024-11-17 14:36:41.154075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.084 [2024-11-17 14:36:41.154500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.084 [2024-11-17 14:36:41.154546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.084 [2024-11-17 14:36:41.154570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.084 [2024-11-17 14:36:41.154769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.084 [2024-11-17 14:36:41.154938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.084 [2024-11-17 14:36:41.154947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.084 [2024-11-17 14:36:41.154953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.084 [2024-11-17 14:36:41.154959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.084 [2024-11-17 14:36:41.166901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.084 [2024-11-17 14:36:41.167320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.084 [2024-11-17 14:36:41.167336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.084 [2024-11-17 14:36:41.167344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.084 [2024-11-17 14:36:41.167512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.084 [2024-11-17 14:36:41.167677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.084 [2024-11-17 14:36:41.167687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.084 [2024-11-17 14:36:41.167693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.084 [2024-11-17 14:36:41.167699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.084 [2024-11-17 14:36:41.179789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.084 [2024-11-17 14:36:41.180140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.084 [2024-11-17 14:36:41.180184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.084 [2024-11-17 14:36:41.180208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.084 [2024-11-17 14:36:41.180758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.084 [2024-11-17 14:36:41.180923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.084 [2024-11-17 14:36:41.180932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.084 [2024-11-17 14:36:41.180938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.084 [2024-11-17 14:36:41.180944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.084 [2024-11-17 14:36:41.192676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.084 [2024-11-17 14:36:41.193006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.084 [2024-11-17 14:36:41.193023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:52.084 [2024-11-17 14:36:41.193030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:52.084 [2024-11-17 14:36:41.193194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:52.084 [2024-11-17 14:36:41.193365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.084 [2024-11-17 14:36:41.193375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.084 [2024-11-17 14:36:41.193387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.084 [2024-11-17 14:36:41.193394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1616021 Killed "${NVMF_APP[@]}" "$@"
00:26:52.084 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:52.084 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:52.084 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:52.084 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:52.084 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.084 [2024-11-17 14:36:41.205868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.084 [2024-11-17 14:36:41.206295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.084 [2024-11-17 14:36:41.206313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420
00:26:52.084 [2024-11-17 14:36:41.206320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set
00:26:52.084 [2024-11-17 14:36:41.206504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor
00:26:52.084 [2024-11-17 14:36:41.206682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.084 [2024-11-17 14:36:41.206692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.084 [2024-11-17 14:36:41.206699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.084 [2024-11-17 14:36:41.206705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
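[Editor's note] The trace above is the turning point of this test phase: bdevperf.sh (line 35) reports that the previous nvmf_tgt instance (pid 1616021) was killed, and tgt_init / nvmfappstart immediately relaunch the target while the host side keeps failing its reconnects. The snippet below is a hedged reconstruction of that kill-and-restart step: tgt_init and nvmfappstart are real helpers in SPDK's test scripts, but their bodies are not shown in this log, so everything here is an illustrative stand-in. Only the binary path, the network namespace, and the flags -i 0 -e 0xFFFF -m 0xE are taken verbatim from the trace.

# Illustrative stand-in for the restart step traced above; the real helpers
# live under spdk/test/nvmf/ and may differ.
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

restart_target() {
    # Drop the previous instance if it is still around (the shell's
    # "Killed" notice above is the visible trace of such a kill).
    [[ -n ${nvmfpid:-} ]] && kill -9 "$nvmfpid" 2>/dev/null

    # Relaunch in the test's network namespace with the logged options:
    # -i 0 (shm id), -e 0xFFFF (tracepoint group mask), -m 0xE (core mask;
    # 0xE = 0b1110, i.e. cores 1-3).
    ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
}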
00:26:52.084 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1617290 00:26:52.084 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1617290 00:26:52.084 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:52.084 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1617290 ']' 00:26:52.085 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.085 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:52.085 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.085 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:52.085 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.085 [2024-11-17 14:36:41.218920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.085 [2024-11-17 14:36:41.219359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.085 [2024-11-17 14:36:41.219376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.085 [2024-11-17 14:36:41.219384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.085 [2024-11-17 14:36:41.219567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.085 [2024-11-17 14:36:41.219747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.085 [2024-11-17 14:36:41.219762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.085 [2024-11-17 14:36:41.219769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.085 [2024-11-17 14:36:41.219777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
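Note: the reconnect failures are expected at this point. bdevperf.sh line 35 killed the first target (pid 1616021, the "Killed" line above), and tgt_init is restarting it here: nvmfappstart -m 0xE launches a new nvmf_tgt (pid 1617290) inside the cvl_0_0_ns_spdk namespace, then waitforlisten polls /var/tmp/spdk.sock until it answers. A hedged sketch of that wait, with the socket path and max_retries=100 taken from the trace but the body itself an assumption (the real helper lives in autotest_common.sh):

    # Sketch only: poll the RPC socket until the new nvmf_tgt answers.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1                   # target exited
            [[ -S "$rpc_addr" ]] &&
                scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 &&
                return 0                                             # socket up and answering
            sleep 0.1
        done
        return 1
    }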
00:26:52.085 [2024-11-17 14:36:41.231965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.085 [2024-11-17 14:36:41.232396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.085 [2024-11-17 14:36:41.232414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.085 [2024-11-17 14:36:41.232421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.085 [2024-11-17 14:36:41.232600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.085 [2024-11-17 14:36:41.232779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.085 [2024-11-17 14:36:41.232789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.085 [2024-11-17 14:36:41.232796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.085 [2024-11-17 14:36:41.232803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.085 [2024-11-17 14:36:41.245036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.085 [2024-11-17 14:36:41.245447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.085 [2024-11-17 14:36:41.245465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.085 [2024-11-17 14:36:41.245474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.085 [2024-11-17 14:36:41.245660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.085 [2024-11-17 14:36:41.245834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.085 [2024-11-17 14:36:41.245843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.085 [2024-11-17 14:36:41.245850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.085 [2024-11-17 14:36:41.245857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.085 [2024-11-17 14:36:41.257329] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:26:52.085 [2024-11-17 14:36:41.257374] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.085 [2024-11-17 14:36:41.258173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.085 [2024-11-17 14:36:41.258602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.085 [2024-11-17 14:36:41.258621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.085 [2024-11-17 14:36:41.258629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.085 [2024-11-17 14:36:41.258802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.085 [2024-11-17 14:36:41.258980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.085 [2024-11-17 14:36:41.258989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.085 [2024-11-17 14:36:41.258997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.085 [2024-11-17 14:36:41.259004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.085 [2024-11-17 14:36:41.271221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.085 [2024-11-17 14:36:41.271588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.085 [2024-11-17 14:36:41.271608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.085 [2024-11-17 14:36:41.271616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.085 [2024-11-17 14:36:41.271790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.085 [2024-11-17 14:36:41.271963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.085 [2024-11-17 14:36:41.271972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.085 [2024-11-17 14:36:41.271979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.085 [2024-11-17 14:36:41.271985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.085 [2024-11-17 14:36:41.284335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.085 [2024-11-17 14:36:41.284763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.085 [2024-11-17 14:36:41.284780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.085 [2024-11-17 14:36:41.284788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.085 [2024-11-17 14:36:41.284962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.085 [2024-11-17 14:36:41.285135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.085 [2024-11-17 14:36:41.285145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.085 [2024-11-17 14:36:41.285152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.085 [2024-11-17 14:36:41.285159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.085 [2024-11-17 14:36:41.297541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.085 [2024-11-17 14:36:41.297886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.085 [2024-11-17 14:36:41.297903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.085 [2024-11-17 14:36:41.297911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.085 [2024-11-17 14:36:41.298088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.085 [2024-11-17 14:36:41.298266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.085 [2024-11-17 14:36:41.298276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.085 [2024-11-17 14:36:41.298283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.085 [2024-11-17 14:36:41.298295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.346 [2024-11-17 14:36:41.310665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.346 [2024-11-17 14:36:41.311025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.346 [2024-11-17 14:36:41.311042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.346 [2024-11-17 14:36:41.311050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.346 [2024-11-17 14:36:41.311222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.346 [2024-11-17 14:36:41.311401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.346 [2024-11-17 14:36:41.311411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.346 [2024-11-17 14:36:41.311418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.346 [2024-11-17 14:36:41.311425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.346 [2024-11-17 14:36:41.323658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.346 [2024-11-17 14:36:41.324083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.346 [2024-11-17 14:36:41.324101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.346 [2024-11-17 14:36:41.324108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.347 [2024-11-17 14:36:41.324282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.347 [2024-11-17 14:36:41.324460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.347 [2024-11-17 14:36:41.324470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.347 [2024-11-17 14:36:41.324477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.347 [2024-11-17 14:36:41.324484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.347 [2024-11-17 14:36:41.336630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.347 [2024-11-17 14:36:41.337055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.347 [2024-11-17 14:36:41.337072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.347 [2024-11-17 14:36:41.337080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.347 [2024-11-17 14:36:41.337253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.347 [2024-11-17 14:36:41.337430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.347 [2024-11-17 14:36:41.337440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.347 [2024-11-17 14:36:41.337447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.347 [2024-11-17 14:36:41.337454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.347 [2024-11-17 14:36:41.338822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:52.347 [2024-11-17 14:36:41.349715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.347 [2024-11-17 14:36:41.350181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.347 [2024-11-17 14:36:41.350202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.347 [2024-11-17 14:36:41.350212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.347 [2024-11-17 14:36:41.350392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.347 [2024-11-17 14:36:41.350568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.347 [2024-11-17 14:36:41.350578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.347 [2024-11-17 14:36:41.350586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.347 [2024-11-17 14:36:41.350593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.347 [2024-11-17 14:36:41.362808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.347 [2024-11-17 14:36:41.363233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.347 [2024-11-17 14:36:41.363251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.347 [2024-11-17 14:36:41.363260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.347 [2024-11-17 14:36:41.363440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.347 [2024-11-17 14:36:41.363614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.347 [2024-11-17 14:36:41.363624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.347 [2024-11-17 14:36:41.363631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.347 [2024-11-17 14:36:41.363638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.347 [2024-11-17 14:36:41.375792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.347 [2024-11-17 14:36:41.376219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.347 [2024-11-17 14:36:41.376237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.347 [2024-11-17 14:36:41.376245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.347 [2024-11-17 14:36:41.376424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.347 [2024-11-17 14:36:41.376599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.347 [2024-11-17 14:36:41.376608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.347 [2024-11-17 14:36:41.376615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.347 [2024-11-17 14:36:41.376622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.347 [2024-11-17 14:36:41.381050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.347 [2024-11-17 14:36:41.381076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.347 [2024-11-17 14:36:41.381083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.347 [2024-11-17 14:36:41.381093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.347 [2024-11-17 14:36:41.381098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
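Note: the app_setup_trace notices above are the target's own instructions for pulling the 0xFFFF tracepoint group data. Following them literally (the spdk_trace binary path is an assumption based on the build tree used elsewhere in this run; the flags come from the notice itself):

    # Snapshot the live trace, or copy the shm file for offline analysis.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    cp /dev/shm/nvmf_trace.0 /tmp/   # offline copy, per the last notice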
00:26:52.347 [2024-11-17 14:36:41.382488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.347 [2024-11-17 14:36:41.382600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.347 [2024-11-17 14:36:41.382601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.347 [2024-11-17 14:36:41.388992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.347 [2024-11-17 14:36:41.389446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.347 [2024-11-17 14:36:41.389467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.347 [2024-11-17 14:36:41.389477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.347 [2024-11-17 14:36:41.389664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.347 [2024-11-17 14:36:41.389841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.347 [2024-11-17 14:36:41.389851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.347 [2024-11-17 14:36:41.389859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.347 [2024-11-17 14:36:41.389868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.347 [2024-11-17 14:36:41.402090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.347 [2024-11-17 14:36:41.402553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.347 [2024-11-17 14:36:41.402577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.347 [2024-11-17 14:36:41.402586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.347 [2024-11-17 14:36:41.402766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.347 [2024-11-17 14:36:41.402947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.347 [2024-11-17 14:36:41.402957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.347 [2024-11-17 14:36:41.402966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.347 [2024-11-17 14:36:41.402973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
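Note: the scheduling lines tie back to the core mask. nvmf_tgt was started with -m 0xE; 0xE is binary 1110, i.e. cores 1, 2 and 3, which matches the earlier "Total cores available: 3" notice and the three reactors started on cores 1, 2 and 3 just above. Decoding such a mask:

    # Decode an SPDK core mask into core indices (0xE -> [1, 2, 3]).
    python3 - <<'EOF'
    mask = 0xE
    print([bit for bit in range(mask.bit_length()) if (mask >> bit) & 1])
    EOF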
00:26:52.347 [2024-11-17 14:36:41.415188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.347 [2024-11-17 14:36:41.415656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.347 [2024-11-17 14:36:41.415677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.347 [2024-11-17 14:36:41.415688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.347 [2024-11-17 14:36:41.415868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.347 [2024-11-17 14:36:41.416049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.347 [2024-11-17 14:36:41.416059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.347 [2024-11-17 14:36:41.416073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.347 [2024-11-17 14:36:41.416081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.347 [2024-11-17 14:36:41.428287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.347 [2024-11-17 14:36:41.428751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.347 [2024-11-17 14:36:41.428772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.347 [2024-11-17 14:36:41.428782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.347 [2024-11-17 14:36:41.428963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.347 [2024-11-17 14:36:41.429141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.347 [2024-11-17 14:36:41.429151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.347 [2024-11-17 14:36:41.429160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.347 [2024-11-17 14:36:41.429168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.347 [2024-11-17 14:36:41.441370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.347 [2024-11-17 14:36:41.441754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.348 [2024-11-17 14:36:41.441775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.348 [2024-11-17 14:36:41.441784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.348 [2024-11-17 14:36:41.441964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.348 [2024-11-17 14:36:41.442144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.348 [2024-11-17 14:36:41.442155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.348 [2024-11-17 14:36:41.442163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.348 [2024-11-17 14:36:41.442171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.348 [2024-11-17 14:36:41.454543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.348 [2024-11-17 14:36:41.454982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.348 [2024-11-17 14:36:41.455000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.348 [2024-11-17 14:36:41.455008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.348 [2024-11-17 14:36:41.455186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.348 [2024-11-17 14:36:41.455371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.348 [2024-11-17 14:36:41.455381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.348 [2024-11-17 14:36:41.455388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.348 [2024-11-17 14:36:41.455395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.348 [2024-11-17 14:36:41.467602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.348 [2024-11-17 14:36:41.467929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.348 [2024-11-17 14:36:41.467947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.348 [2024-11-17 14:36:41.467955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.348 [2024-11-17 14:36:41.468133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.348 [2024-11-17 14:36:41.468312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.348 [2024-11-17 14:36:41.468321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.348 [2024-11-17 14:36:41.468328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.348 [2024-11-17 14:36:41.468335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.348 [2024-11-17 14:36:41.480699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.348 [2024-11-17 14:36:41.481042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.348 [2024-11-17 14:36:41.481061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.348 [2024-11-17 14:36:41.481069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.348 [2024-11-17 14:36:41.481247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.348 [2024-11-17 14:36:41.481432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.348 [2024-11-17 14:36:41.481443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.348 [2024-11-17 14:36:41.481450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.348 [2024-11-17 14:36:41.481458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.348 [2024-11-17 14:36:41.493831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.348 [2024-11-17 14:36:41.494178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.348 [2024-11-17 14:36:41.494197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.348 [2024-11-17 14:36:41.494207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.348 [2024-11-17 14:36:41.494391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.348 [2024-11-17 14:36:41.494570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.348 [2024-11-17 14:36:41.494582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.348 [2024-11-17 14:36:41.494590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.348 [2024-11-17 14:36:41.494597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.348 [2024-11-17 14:36:41.506985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.348 [2024-11-17 14:36:41.507282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.348 [2024-11-17 14:36:41.507300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.348 [2024-11-17 14:36:41.507309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.348 [2024-11-17 14:36:41.507493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.348 [2024-11-17 14:36:41.507672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.348 [2024-11-17 14:36:41.507682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.348 [2024-11-17 14:36:41.507688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.348 [2024-11-17 14:36:41.507695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.348 [2024-11-17 14:36:41.517911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.348 [2024-11-17 14:36:41.520075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.348 [2024-11-17 14:36:41.520421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.348 [2024-11-17 14:36:41.520439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.348 [2024-11-17 14:36:41.520447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.348 [2024-11-17 14:36:41.520626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.348 [2024-11-17 14:36:41.520805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.348 [2024-11-17 14:36:41.520814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.348 [2024-11-17 14:36:41.520821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.348 [2024-11-17 14:36:41.520827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.348 4956.17 IOPS, 19.36 MiB/s [2024-11-17T13:36:41.573Z] [2024-11-17 14:36:41.533181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.348 [2024-11-17 14:36:41.533477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.348 [2024-11-17 14:36:41.533495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.348 [2024-11-17 14:36:41.533504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.348 [2024-11-17 14:36:41.533686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.348 [2024-11-17 14:36:41.533865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.348 [2024-11-17 14:36:41.533875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.349 [2024-11-17 14:36:41.533882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.349 [2024-11-17 14:36:41.533888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.349 [2024-11-17 14:36:41.546256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.349 [2024-11-17 14:36:41.546688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.349 [2024-11-17 14:36:41.546706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.349 [2024-11-17 14:36:41.546714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.349 [2024-11-17 14:36:41.546892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.349 [2024-11-17 14:36:41.547072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.349 [2024-11-17 14:36:41.547083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.349 [2024-11-17 14:36:41.547091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.349 [2024-11-17 14:36:41.547098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
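Note: with the new target answering RPC, tgt_init provisions it. The transport is created in the trace above, and the Malloc0 bdev, subsystem, namespace and listener follow in the next lines; collected into one sketch here, with every flag copied from the trace (rpc_cmd is assumed to resolve to scripts/rpc.py against /var/tmp/spdk.sock):

    rpc="scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is back on 10.0.0.2 port 4420, the queued resets finally succeed (the "Resetting controller successful" line further down) and bdevperf's IOPS climb back up.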
00:26:52.349 Malloc0 00:26:52.349 [2024-11-17 14:36:41.559315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.349 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.349 [2024-11-17 14:36:41.559743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.349 [2024-11-17 14:36:41.559762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.349 [2024-11-17 14:36:41.559770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.349 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:52.349 [2024-11-17 14:36:41.559949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.349 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.349 [2024-11-17 14:36:41.560127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.349 [2024-11-17 14:36:41.560138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.349 [2024-11-17 14:36:41.560145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.349 [2024-11-17 14:36:41.560151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.349 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.608 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.608 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.608 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.608 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.608 [2024-11-17 14:36:41.572501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.608 [2024-11-17 14:36:41.572864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.608 [2024-11-17 14:36:41.572881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a500 with addr=10.0.0.2, port=4420 00:26:52.608 [2024-11-17 14:36:41.572889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a500 is same with the state(6) to be set 00:26:52.608 [2024-11-17 14:36:41.573067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a500 (9): Bad file descriptor 00:26:52.608 [2024-11-17 14:36:41.573246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.608 [2024-11-17 14:36:41.573255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.608 [2024-11-17 14:36:41.573262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:26:52.608 [2024-11-17 14:36:41.573269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:52.608 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.608 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.608 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.608 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.608 [2024-11-17 14:36:41.582557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.608 [2024-11-17 14:36:41.585633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.608 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.608 14:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1616360 00:26:52.608 [2024-11-17 14:36:41.649428] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:26:54.481 5624.00 IOPS, 21.97 MiB/s [2024-11-17T13:36:44.643Z] 6331.00 IOPS, 24.73 MiB/s [2024-11-17T13:36:45.580Z] 6864.56 IOPS, 26.81 MiB/s [2024-11-17T13:36:46.957Z] 7313.20 IOPS, 28.57 MiB/s [2024-11-17T13:36:47.557Z] 7670.18 IOPS, 29.96 MiB/s [2024-11-17T13:36:48.935Z] 7960.50 IOPS, 31.10 MiB/s [2024-11-17T13:36:49.871Z] 8212.77 IOPS, 32.08 MiB/s [2024-11-17T13:36:50.808Z] 8437.43 IOPS, 32.96 MiB/s 00:27:01.583 Latency(us) 00:27:01.583 [2024-11-17T13:36:50.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.583 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:01.583 Verification LBA range: start 0x0 length 0x4000 00:27:01.583 Nvme1n1 : 15.00 8605.51 33.62 10910.05 0.00 6538.62 448.78 16754.42 00:27:01.583 [2024-11-17T13:36:50.809Z] =================================================================================================================== 00:27:01.584 [2024-11-17T13:36:50.809Z] Total : 8605.51 33.62 10910.05 0.00 6538.62 448.78 16754.42 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 
00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:01.584 rmmod nvme_tcp 00:27:01.584 rmmod nvme_fabrics 00:27:01.584 rmmod nvme_keyring 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1617290 ']' 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1617290 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1617290 ']' 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1617290 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.584 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1617290 00:27:01.843 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:01.843 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:01.843 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1617290' 00:27:01.843 killing process with pid 1617290 00:27:01.843 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1617290 00:27:01.843 14:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1617290 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.843 14:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:04.381 00:27:04.381 real 0m25.922s 00:27:04.381 user 1m0.095s 00:27:04.381 sys 0m6.818s 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # 
set +x 00:27:04.381 ************************************ 00:27:04.381 END TEST nvmf_bdevperf 00:27:04.381 ************************************ 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.381 ************************************ 00:27:04.381 START TEST nvmf_target_disconnect 00:27:04.381 ************************************ 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:04.381 * Looking for test storage... 00:27:04.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.381 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:04.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.382 --rc genhtml_branch_coverage=1 00:27:04.382 --rc genhtml_function_coverage=1 00:27:04.382 --rc genhtml_legend=1 00:27:04.382 --rc geninfo_all_blocks=1 00:27:04.382 --rc geninfo_unexecuted_blocks=1 00:27:04.382 00:27:04.382 ' 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:04.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.382 --rc genhtml_branch_coverage=1 00:27:04.382 --rc genhtml_function_coverage=1 00:27:04.382 --rc genhtml_legend=1 00:27:04.382 --rc geninfo_all_blocks=1 00:27:04.382 --rc geninfo_unexecuted_blocks=1 00:27:04.382 00:27:04.382 ' 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:04.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.382 --rc genhtml_branch_coverage=1 00:27:04.382 --rc genhtml_function_coverage=1 00:27:04.382 --rc genhtml_legend=1 00:27:04.382 --rc geninfo_all_blocks=1 00:27:04.382 --rc geninfo_unexecuted_blocks=1 00:27:04.382 00:27:04.382 ' 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:04.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.382 --rc genhtml_branch_coverage=1 00:27:04.382 --rc genhtml_function_coverage=1 00:27:04.382 --rc genhtml_legend=1 00:27:04.382 --rc geninfo_all_blocks=1 00:27:04.382 --rc geninfo_unexecuted_blocks=1 00:27:04.382 00:27:04.382 ' 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:04.382 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:04.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:04.383 14:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:10.954 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:10.954 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:10.954 Found net devices under 0000:86:00.0: cvl_0_0 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:10.954 Found net devices under 0000:86:00.1: cvl_0_1 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:10.954 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
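The discovery pass above matches both E810 functions (vendor 0x8086, device 0x159b) and maps each PCI function to its kernel net device through sysfs. A minimal bash sketch of that mapping step, using the PCI addresses from this run; the sysfs layout is the only assumption:

# Map each candidate PCI function to its net device, mirroring the
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in the trace above.
for pci in 0000:86:00.0 0000:86:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -d "$netdir" ] || continue          # skip if the glob matched nothing
        echo "Found net devices under $pci: ${netdir##*/}"
    done
done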
00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:10.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:27:10.955 00:27:10.955 --- 10.0.0.2 ping statistics --- 00:27:10.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.955 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:27:10.955 00:27:10.955 --- 10.0.0.1 ping statistics --- 00:27:10.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.955 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:10.955 ************************************ 00:27:10.955 START TEST nvmf_target_disconnect_tc1 00:27:10.955 ************************************ 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.955 14:36:59 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.955 [2024-11-17 14:36:59.482321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.955 [2024-11-17 14:36:59.482387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1dab0 with addr=10.0.0.2, port=4420 00:27:10.955 [2024-11-17 14:36:59.482412] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:10.955 [2024-11-17 14:36:59.482426] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:10.955 [2024-11-17 14:36:59.482434] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:10.955 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:10.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:10.955 Initializing NVMe Controllers 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.955 00:27:10.955 real 0m0.121s 00:27:10.955 user 0m0.053s 00:27:10.955 sys 0m0.068s 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:10.955 ************************************ 00:27:10.955 END TEST nvmf_target_disconnect_tc1 00:27:10.955 ************************************ 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
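tc1 above is a deliberate negative test: nothing is listening on 10.0.0.2:4420 yet, so spdk_nvme_probe() inside the reconnect example must fail with connect() errno 111, and the wrapper passes only because the exit status is non-zero (the (( !es == 0 )) check). A sketch of that inverted-expectation pattern, with $SPDK_DIR standing in for the long workspace path:

# tc1 pattern: the probe must fail while no target listens on the port.
if "$SPDK_DIR/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
    echo "FAIL: probe unexpectedly succeeded" >&2
    exit 1
fi
echo "PASS: connection was refused, as the test expects"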
00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:10.955 ************************************ 00:27:10.955 START TEST nvmf_target_disconnect_tc2 00:27:10.955 ************************************ 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1622454 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1622454 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1622454 ']' 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.955 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.956 [2024-11-17 14:36:59.630493] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:27:10.956 [2024-11-17 14:36:59.630541] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.956 [2024-11-17 14:36:59.712983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.956 [2024-11-17 14:36:59.755504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.956 [2024-11-17 14:36:59.755541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
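The nvmf_tgt started above runs inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init assembled a few seconds earlier, so the target owns one physical E810 port while the initiator keeps the other in the root namespace. The plumbing, condensed from the ip/iptables commands in the trace (interface names and addresses exactly as logged):

# One port per side: cvl_0_0 (target, 10.0.0.2) in its own netns,
# cvl_0_1 (initiator, 10.0.0.1) left in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP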
00:27:10.956 [2024-11-17 14:36:59.755549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.956 [2024-11-17 14:36:59.755555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.956 [2024-11-17 14:36:59.755560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.956 [2024-11-17 14:36:59.757076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:10.956 [2024-11-17 14:36:59.757187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:10.956 [2024-11-17 14:36:59.757295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:10.956 [2024-11-17 14:36:59.757296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.956 Malloc0 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.956 [2024-11-17 14:36:59.929725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.956 14:36:59 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.956 [2024-11-17 14:36:59.961973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1622481 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:10.956 14:36:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:12.867 14:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1622454 00:27:12.867 14:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:12.867 Read completed with error (sct=0, sc=8) 00:27:12.867 starting I/O failed 00:27:12.867 Read completed with error (sct=0, sc=8) 00:27:12.867 starting I/O failed 00:27:12.867 Read completed with error (sct=0, sc=8) 00:27:12.867 starting I/O failed 00:27:12.867 Read completed with error (sct=0, sc=8) 00:27:12.867 starting I/O failed 00:27:12.867 Read completed with error (sct=0, sc=8) 00:27:12.867 starting I/O failed 00:27:12.867 Read completed with error (sct=0, sc=8) 00:27:12.867 starting I/O failed 00:27:12.867 Read completed with error 
(sct=0, sc=8) 00:27:12.867 starting I/O failed 00:27:12.867 [... the remaining Read/Write completions in this burst failed identically: "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" ...] 00:27:12.867 [2024-11-17 14:37:01.997280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.868 [... a second burst of identical failed completions elided ...] [2024-11-17 14:37:01.997488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.868 [... a third burst of identical failed completions elided ...] [2024-11-17 14:37:01.997693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.868 [... a fourth burst of identical failed completions elided ...] [2024-11-17 14:37:01.997891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.868 [2024-11-17 14:37:01.998177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.868 [2024-11-17 14:37:01.998204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:12.868 qpair failed and we were unable to recover it. 00:27:12.868 [... seven more identical connect()-refused retry triplets against tqpair=0x7f518c000b90, timestamps 14:37:01.998410 through 14:37:01.999459, elided ...]
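This cascade is the point of tc2: the script kills the target (kill -9 1622454 above) two seconds into the random read/write workload, so every outstanding completion on the four qpairs errors out with (sct=0, sc=8) and the host drops into its reconnect loop. The choreography from host/target_disconnect.sh, reduced to its moving parts, again with $SPDK_DIR as shorthand for the workspace path:

# tc2 choreography: live target, workload in flight, then yank the target away.
"$SPDK_DIR/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
reconnectpid=$!     # 1622481 in this run; the script waits on it later
sleep 2             # let I/O ramp up on all qpairs
kill -9 "$nvmfpid"  # 1622454 here: in-flight I/O now completes in error
sleep 2             # observe the host retrying against a dead port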
00:27:12.869 [... the reconnect loop then repeats for the rest of the capture: roughly forty further connect() attempts, timestamps 14:37:01.999644 through 14:37:02.007705, all fail with "connect() failed, errno = 111", first against tqpair=0x7f518c000b90 and from [2024-11-17 14:37:02.004092] onward against tqpair=0x7f5198000b90, each attempt ending "qpair failed and we were unable to recover it." ...]
00:27:12.870 [2024-11-17 14:37:02.007913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.007945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.008159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.008191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.008314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.008346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.008509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.008543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.008734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.008766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.008904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.008936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.009112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.009144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.009331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.009375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.009507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.009540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.009724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.009757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 
00:27:12.870 [2024-11-17 14:37:02.009908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.009940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.010135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.010168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.010438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.010471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.010666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.010698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.010868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.010900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.011192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.011224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.011438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.011472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.011594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.011626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.011768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.011800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 00:27:12.870 [2024-11-17 14:37:02.011923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.870 [2024-11-17 14:37:02.011956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:12.870 qpair failed and we were unable to recover it. 
00:27:12.870 [2024-11-17 14:37:02.012143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.870 [2024-11-17 14:37:02.012175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:12.870 qpair failed and we were unable to recover it.
00:27:12.870 [2024-11-17 14:37:02.012400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.870 [2024-11-17 14:37:02.012434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:12.871 qpair failed and we were unable to recover it.
00:27:12.871 [2024-11-17 14:37:02.012615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.871 [2024-11-17 14:37:02.012698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:12.871 qpair failed and we were unable to recover it.
00:27:12.871 [2024-11-17 14:37:02.013013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.871 [2024-11-17 14:37:02.013065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:12.871 qpair failed and we were unable to recover it.
00:27:12.871 [2024-11-17 14:37:02.013281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.871 [2024-11-17 14:37:02.013316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:12.871 qpair failed and we were unable to recover it.
00:27:12.871 [2024-11-17 14:37:02.013522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.871 [2024-11-17 14:37:02.013556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:12.871 qpair failed and we were unable to recover it.
00:27:12.871 [2024-11-17 14:37:02.013688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.871 [2024-11-17 14:37:02.013722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:12.871 qpair failed and we were unable to recover it.
00:27:12.871 [2024-11-17 14:37:02.013972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.871 [2024-11-17 14:37:02.014004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:12.871 qpair failed and we were unable to recover it.
00:27:12.871 [2024-11-17 14:37:02.014202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.871 [2024-11-17 14:37:02.014235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:12.871 qpair failed and we were unable to recover it.
00:27:12.871 [2024-11-17 14:37:02.014374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.871 [2024-11-17 14:37:02.014408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:12.871 qpair failed and we were unable to recover it.
00:27:12.871 [2024-11-17 14:37:02.014530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.014563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.014695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.014727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.014863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.014893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.015153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.015185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.015372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.015405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.015620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.015652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.017166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.017223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.017456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.017492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.017761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.017794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.017932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.017964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 
00:27:12.871 [2024-11-17 14:37:02.018155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.018187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.018474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.018507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.018698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.018731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.018859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.018890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.019221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.019253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.019507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.019542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.019739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.019772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.020041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.020073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.020369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.020403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.020591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.020633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 
00:27:12.871 [2024-11-17 14:37:02.020819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.020851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.021049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.021082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.021336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.021379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.021571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.021603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.022957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.023009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.023252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.023286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.023477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.023512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.023706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.023737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.024017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.871 [2024-11-17 14:37:02.024049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.871 qpair failed and we were unable to recover it. 00:27:12.871 [2024-11-17 14:37:02.024187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.024219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 
00:27:12.872 [2024-11-17 14:37:02.024443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.024475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.024619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.024651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.024774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.024805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.025131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.025164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.025378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.025413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.027223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.027281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.027494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.027530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.027722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.027756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.028001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.028033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.028275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.028307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 
00:27:12.872 [2024-11-17 14:37:02.028482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.028516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.028724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.028756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.028953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.028985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.029122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.029154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.029272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.029305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.029466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.029501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.029819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.029859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.030098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.030130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.030320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.030368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.030581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.030615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 
00:27:12.872 [2024-11-17 14:37:02.030747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.030779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.030919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.030953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.031193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.031225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.031397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.031433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.031574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.031606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.031744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.031777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.032041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.032072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.872 [2024-11-17 14:37:02.032255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.872 [2024-11-17 14:37:02.032288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.872 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.032494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.032528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.032663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.032695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 
00:27:12.873 [2024-11-17 14:37:02.032840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.032873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.033180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.033213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.033459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.033493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.033616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.033648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.033776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.033809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.034002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.034033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.034270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.034302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.034483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.034518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.034628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.034660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.034878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.034910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 
00:27:12.873 [2024-11-17 14:37:02.035014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.035046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.035240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.035272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.035491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.035525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.035716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.035754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.035941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.035991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.036184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.036216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.036334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.036378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.036596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.036629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.036824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.036856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.037060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.037093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 
00:27:12.873 [2024-11-17 14:37:02.037228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.037260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.037506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.037540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.037729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.037761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.038015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.038049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.038310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.038341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.038555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.038590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.038719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.038748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.039024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.039056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.039324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.039368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.039557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.039590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 
00:27:12.873 [2024-11-17 14:37:02.039711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.039745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.039944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.873 [2024-11-17 14:37:02.039975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.873 qpair failed and we were unable to recover it. 00:27:12.873 [2024-11-17 14:37:02.040258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.040293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.040539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.040573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.040775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.040808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.040993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.041025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.041263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.041295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.041561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.041595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.041779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.041810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.042154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.042186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 
00:27:12.874 [2024-11-17 14:37:02.042388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.042422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.042603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.042635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.042759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.042792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.043149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.043181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.043385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.043419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.043634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.043666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.043787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.043820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.043941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.043973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.044116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.044149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 00:27:12.874 [2024-11-17 14:37:02.044372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.874 [2024-11-17 14:37:02.044406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:12.874 qpair failed and we were unable to recover it. 
00:27:12.877 [2024-11-17 14:37:02.069024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-17 14:37:02.069099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
(the same pair repeats for tqpair=0x7f518c000b90 from [2024-11-17 14:37:02.069371] through [2024-11-17 14:37:02.079709])
00:27:12.878 [2024-11-17 14:37:02.080033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-17 14:37:02.080110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
(the pair repeats again for tqpair=0x1fb6ba0 from [2024-11-17 14:37:02.080312] through [2024-11-17 14:37:02.083310])
00:27:13.155 [2024-11-17 14:37:02.083595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.155 [2024-11-17 14:37:02.083669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:13.155 qpair failed and we were unable to recover it.
00:27:13.155 [2024-11-17 14:37:02.083901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.155 [2024-11-17 14:37:02.083939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:13.155 qpair failed and we were unable to recover it.
(the pair repeats for tqpair=0x7f518c000b90 from [2024-11-17 14:37:02.084191] through [2024-11-17 14:37:02.086557])
00:27:13.156 [2024-11-17 14:37:02.086731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.156 [2024-11-17 14:37:02.086773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:13.156 qpair failed and we were unable to recover it.
(the pair repeats for tqpair=0x7f5190000b90 from [2024-11-17 14:37:02.086985] through [2024-11-17 14:37:02.093838])
00:27:13.156 [2024-11-17 14:37:02.092099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.156 [2024-11-17 14:37:02.092131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.156 qpair failed and we were unable to recover it. 00:27:13.156 [2024-11-17 14:37:02.092260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.156 [2024-11-17 14:37:02.092295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.156 qpair failed and we were unable to recover it. 00:27:13.156 [2024-11-17 14:37:02.092592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.156 [2024-11-17 14:37:02.092627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.156 qpair failed and we were unable to recover it. 00:27:13.156 [2024-11-17 14:37:02.092775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.156 [2024-11-17 14:37:02.092808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.156 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.092936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.092970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.093172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.093204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.093390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.093424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.093537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.093570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.093804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.093838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.095312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.095379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 
00:27:13.157 [2024-11-17 14:37:02.095564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.095596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.095794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.095827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.096077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.096110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.096337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.096386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.096513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.096546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.096746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.096778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.097147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.097223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.097416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.097455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.097660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.097696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.097825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.097859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 
00:27:13.157 [2024-11-17 14:37:02.098010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.098044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.098170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.098204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.098390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.098425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.098624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.098658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.098911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.098944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.099239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.099272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.099497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.099531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.099776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.099810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.100029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.100061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.100350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.100408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 
00:27:13.157 [2024-11-17 14:37:02.100588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.100621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.100772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.100806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.101146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.101179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.101375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.101411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.101719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.101751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.102002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.102035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.102338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.102381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.157 [2024-11-17 14:37:02.102581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.157 [2024-11-17 14:37:02.102614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.157 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.102758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.102795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.103123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.103156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 
00:27:13.158 [2024-11-17 14:37:02.103364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.103398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.103518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.103551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.103683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.103727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.103958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.104002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.104214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.104262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.105644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.105705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.105953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.105988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.106267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.106301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.106571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.106605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.106737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.106770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 
00:27:13.158 [2024-11-17 14:37:02.106958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.106990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.107261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.107312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.107500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.107535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.107677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.107711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.107989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.108022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.108208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.108240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.108402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.108438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.108646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.108679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.108876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.108909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.109060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.109095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 
00:27:13.158 [2024-11-17 14:37:02.109379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.109413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.109631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.109664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.109813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.109846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.109965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.110017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.110217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.110248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.110540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.110575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.110883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.110916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.111064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.111099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.111224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.111257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.111458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.111499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 
00:27:13.158 [2024-11-17 14:37:02.111639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.111671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.111890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.111925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.158 qpair failed and we were unable to recover it. 00:27:13.158 [2024-11-17 14:37:02.112051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.158 [2024-11-17 14:37:02.112086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.112385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.112421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.112628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.112661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.112795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.112829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.113020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.113053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.113188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.113221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.113412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.113446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.113721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.113756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 
00:27:13.159 [2024-11-17 14:37:02.113943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.113977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.114197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.114229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.115716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.115772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.115958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.115991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.116270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.116303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.116595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.116630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.116823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.116858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.117053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.117088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.117375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.117411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.117618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.117651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 
00:27:13.159 [2024-11-17 14:37:02.117831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.117865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.118064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.118099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.118294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.118327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.118474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.118512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.118693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.118727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.119001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.119035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.119236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.119272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.119532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.119568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.119720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.119754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.119919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.119955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 
00:27:13.159 [2024-11-17 14:37:02.120165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.120199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.120410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.120447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.120571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.120606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.120823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.120857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.121058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.121091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.121290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.121326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.121455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.121488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.121706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.121742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.121898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.159 [2024-11-17 14:37:02.121931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.159 qpair failed and we were unable to recover it. 00:27:13.159 [2024-11-17 14:37:02.122133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.122169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 
00:27:13.160 [2024-11-17 14:37:02.122383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.122419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.122561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.122594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.122846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.122880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.123037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.123072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.123378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.123413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.123546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.123580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.123720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.123755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.123959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.123993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.124220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.124253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.124445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.124481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 
00:27:13.160 [2024-11-17 14:37:02.124674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.124706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.124859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.124895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.125109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.125143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.125332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.125377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.125514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.125546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.125659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.125692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.125888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.125920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.126069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.126103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.126322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.126364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.126520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.126553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 
00:27:13.160 [2024-11-17 14:37:02.126780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.126813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.127006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.127039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.127174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.127207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.127365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.127399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.127556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.127589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.127722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.127754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.128018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.128052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.128233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.160 [2024-11-17 14:37:02.128267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.160 qpair failed and we were unable to recover it. 00:27:13.160 [2024-11-17 14:37:02.128463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.161 [2024-11-17 14:37:02.128498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.161 qpair failed and we were unable to recover it. 00:27:13.161 [2024-11-17 14:37:02.128705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.161 [2024-11-17 14:37:02.128738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.161 qpair failed and we were unable to recover it. 
00:27:13.161 [2024-11-17 14:37:02.128887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.161 [2024-11-17 14:37:02.128922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.161 qpair failed and we were unable to recover it.
[... the three-message sequence above repeats for approximately 210 consecutive connection attempts, with only the timestamps advancing (14:37:02.128 through 14:37:02.180); every attempt fails identically with errno = 111 against tqpair=0x7f5198000b90, addr=10.0.0.2, port=4420 ...]
00:27:13.166 [2024-11-17 14:37:02.180137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.166 [2024-11-17 14:37:02.180171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.166 qpair failed and we were unable to recover it.
00:27:13.167 [2024-11-17 14:37:02.180315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.180348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.180617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.180651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.180836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.180868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.181071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.181104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.181286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.181319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.181524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.181559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.181701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.181734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.182051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.182084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.182384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.182419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.182673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.182708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 
00:27:13.167 [2024-11-17 14:37:02.182907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.182940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.183069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.183103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.183307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.183341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.183482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.183515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.183627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.183658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.183881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.183916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.184111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.184143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.184341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.184388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.184583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.184617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.184757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.184790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 
00:27:13.167 [2024-11-17 14:37:02.184984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.185018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.185240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.185275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.185461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.185497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.185643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.185676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.185827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.185860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.186043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.186077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.186377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.186418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.186607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.186640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.186894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.186926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.187239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.187273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 
00:27:13.167 [2024-11-17 14:37:02.187463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.187499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.187718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.187751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.187946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.187980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.188287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.188320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.188566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.188601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.188722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.188755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.188887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.188919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.167 [2024-11-17 14:37:02.189119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.167 [2024-11-17 14:37:02.189153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.167 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.189435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.189471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.189591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.189625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 
00:27:13.168 [2024-11-17 14:37:02.189754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.189788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.189940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.189972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.190182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.190217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.190395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.190430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.190621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.190655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.190886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.190920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.191119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.191152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.191412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.191447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.191714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.191748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.191954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.191987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 
00:27:13.168 [2024-11-17 14:37:02.192266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.192299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.192520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.192554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.192739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.192772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.193080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.193113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.193306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.193339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.193515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.193549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.193819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.193852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.193989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.194023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.194224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.194257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.194528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.194564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 
00:27:13.168 [2024-11-17 14:37:02.194841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.194874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.195166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.195200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.195502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.195538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.195745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.195778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.196003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.196037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.196218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.196252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.196462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.196503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.196687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.196720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.196875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.196910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.197120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.197152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 
00:27:13.168 [2024-11-17 14:37:02.197443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.197478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.197610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.197642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.197794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.197827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.198127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.198161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.198400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.198436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.168 [2024-11-17 14:37:02.198693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.168 [2024-11-17 14:37:02.198727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.168 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.198906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.198940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.199146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.199181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.199446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.199482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.199670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.199704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 
00:27:13.169 [2024-11-17 14:37:02.199915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.199949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.200178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.200211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.200463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.200499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.200752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.200785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.201083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.201117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.201386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.201420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.201647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.201681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.201835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.201867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.201999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.202032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.202222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.202256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 
00:27:13.169 [2024-11-17 14:37:02.202507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.202541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.202735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.202770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.202963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.202995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.203205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.203238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.203498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.203532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.203678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.203711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.203934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.203967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.204173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.204206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.204479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.204514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.204640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.204672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 
00:27:13.169 [2024-11-17 14:37:02.204949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.204982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.205249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.205283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.205601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.205636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.205874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.205907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.206121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.206155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.206431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.206466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.206692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.206730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.206874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.206908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.207134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.207168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.207293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.207326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 
00:27:13.169 [2024-11-17 14:37:02.207472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.207509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.207737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.207768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.207967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.169 [2024-11-17 14:37:02.208000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.169 qpair failed and we were unable to recover it. 00:27:13.169 [2024-11-17 14:37:02.208204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.208237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.208459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.208493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.208679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.208713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.209018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.209051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.209325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.209386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.209610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.209645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.209797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.209830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 
00:27:13.170 [2024-11-17 14:37:02.210091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.210126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.210401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.210435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.210580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.210614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.210813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.210846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.210994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.211027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.211296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.211329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.211547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.211581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.211835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.211867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.212161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.212194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 00:27:13.170 [2024-11-17 14:37:02.212492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.170 [2024-11-17 14:37:02.212526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.170 qpair failed and we were unable to recover it. 
00:27:13.170 [2024-11-17 14:37:02.212649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.170 [2024-11-17 14:37:02.212681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:13.170 qpair failed and we were unable to recover it.
[The three-line failure sequence above repeats back-to-back with successive timestamps, tqpair=0x7f5198000b90 failing against addr=10.0.0.2, port=4420 on every attempt; the last of these connect() attempts is logged at 14:37:02.261322.]
00:27:13.175 [2024-11-17 14:37:02.261630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.175 [2024-11-17 14:37:02.261709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:13.175 qpair failed and we were unable to recover it.
[The same failure sequence then repeats for tqpair=0x7f5190000b90 through the end of this excerpt, the last connect() attempt being logged at 14:37:02.264899.]
00:27:13.176 [2024-11-17 14:37:02.265127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.265161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.265304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.265339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.265577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.265611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.265865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.265899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.266079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.266113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.266321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.266366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.266561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.266593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.266737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.266771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.266986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.267019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.267237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.267277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 
00:27:13.176 [2024-11-17 14:37:02.267468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.267514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.267674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.267709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.267847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.267881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.268098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.268133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.268414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.268449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.268578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.268612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.268810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.268850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.269071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.269105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.269373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.269409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.269548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.269583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 
00:27:13.176 [2024-11-17 14:37:02.269724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.269759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.269901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.269936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.270152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.270190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.270384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.270420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.270620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.270656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.270807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.270844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.271112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.271149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.271378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.271417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.176 [2024-11-17 14:37:02.271558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.176 [2024-11-17 14:37:02.271594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.176 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.271804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.271840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 
00:27:13.177 [2024-11-17 14:37:02.272100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.272138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.272325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.272371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.272560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.272594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.272721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.272764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.272923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.272967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.273110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.273145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.273435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.273469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.273692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.273727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.273878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.273913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.274055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.274089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 
00:27:13.177 [2024-11-17 14:37:02.274301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.274335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.274482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.274520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.274711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.274746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.274877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.274913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.275018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.275053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.275240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.275273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.275411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.275446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.275575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.275610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.275804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.275845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.275997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.276030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 
00:27:13.177 [2024-11-17 14:37:02.276230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.276265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.276454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.276496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.276707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.276741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.276942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.276976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.277233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.277267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.277399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.277435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.277550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.277584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.277772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.277808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.278010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.278044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.278180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.278215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 
00:27:13.177 [2024-11-17 14:37:02.278380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.278416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.280023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.280083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.280397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.280435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.280649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.280685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.280872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.280905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.281113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.281146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.177 [2024-11-17 14:37:02.281260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.177 [2024-11-17 14:37:02.281296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.177 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.281437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.281474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.281723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.281760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.281889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.281924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 
00:27:13.178 [2024-11-17 14:37:02.282211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.282246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.282382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.282419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.282677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.282712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.282836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.282878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.283144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.283178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.283377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.283412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.283548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.283582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.283834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.283869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.283993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.284026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.284314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.284348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 
00:27:13.178 [2024-11-17 14:37:02.284503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.284536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.284682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.284715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.284844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.284879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.285014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.285048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.285238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.285274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.285410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.285446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.285572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.285605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.285860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.285894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.286027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.286061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.286249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.286282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 
00:27:13.178 [2024-11-17 14:37:02.286485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.286521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.286828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.286861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.286992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.287026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.287292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.287325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.287581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.287618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.287810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.287844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.288102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.288136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.288270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.288303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.288436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.288472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.288774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.288807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 
00:27:13.178 [2024-11-17 14:37:02.289092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.289127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.289251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.289286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.289423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.289458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.289675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.289709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.178 [2024-11-17 14:37:02.289916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.178 [2024-11-17 14:37:02.289950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.178 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.290150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.290185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.290381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.290416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.290541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.290574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.290694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.290727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.290859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.290894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 
00:27:13.179 [2024-11-17 14:37:02.291091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.291124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.291261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.291294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.291425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.291459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.291597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.291638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.291775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.291811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.291991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.292025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.292133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.292167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.292315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.292349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.292645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.292680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.292894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.292928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 
00:27:13.179 [2024-11-17 14:37:02.293052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.293086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.293228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.293263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.293401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.293438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.293620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.293654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.293789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.293823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.294028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.294061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.294193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.294229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.294438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.294474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.294690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.294724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 00:27:13.179 [2024-11-17 14:37:02.294881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.179 [2024-11-17 14:37:02.294914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.179 qpair failed and we were unable to recover it. 
00:27:13.179 [2024-11-17 14:37:02.295037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.179 [2024-11-17 14:37:02.295072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:13.179 qpair failed and we were unable to recover it.
[identical triplet repeated: connect() failed (errno = 111) / sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." recurring from 14:37:02.295205 through 14:37:02.351504 (elapsed 00:27:13.179 to 00:27:13.185); roughly 200 further occurrences omitted]
00:27:13.185 [2024-11-17 14:37:02.351649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.351684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.351990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.352022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.352156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.352189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.352410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.352447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.352657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.352692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.352882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.352917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.353133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.353166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.353378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.353414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.353624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.353658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.353786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.353820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 
00:27:13.185 [2024-11-17 14:37:02.353964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.353998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.354208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.354243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.354504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.354542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.354804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.354845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.354997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.355031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.355226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.355261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.355465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.355501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.355704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.355737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.355878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.355913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 00:27:13.185 [2024-11-17 14:37:02.356211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.356245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.185 qpair failed and we were unable to recover it. 
00:27:13.185 [2024-11-17 14:37:02.356373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.185 [2024-11-17 14:37:02.356409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.186 qpair failed and we were unable to recover it. 00:27:13.186 [2024-11-17 14:37:02.356600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.186 [2024-11-17 14:37:02.356633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.186 qpair failed and we were unable to recover it. 00:27:13.186 [2024-11-17 14:37:02.356782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.186 [2024-11-17 14:37:02.356818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.186 qpair failed and we were unable to recover it. 00:27:13.186 [2024-11-17 14:37:02.357059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.186 [2024-11-17 14:37:02.357093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.186 qpair failed and we were unable to recover it. 00:27:13.186 [2024-11-17 14:37:02.357219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.186 [2024-11-17 14:37:02.357254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.186 qpair failed and we were unable to recover it. 00:27:13.186 [2024-11-17 14:37:02.357442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.186 [2024-11-17 14:37:02.357479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.186 qpair failed and we were unable to recover it. 00:27:13.186 [2024-11-17 14:37:02.357700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.186 [2024-11-17 14:37:02.357733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.186 qpair failed and we were unable to recover it. 00:27:13.186 [2024-11-17 14:37:02.357881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.186 [2024-11-17 14:37:02.357916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.186 qpair failed and we were unable to recover it. 00:27:13.186 [2024-11-17 14:37:02.358037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.186 [2024-11-17 14:37:02.358072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.186 qpair failed and we were unable to recover it. 00:27:13.186 [2024-11-17 14:37:02.358365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.186 [2024-11-17 14:37:02.358401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.186 qpair failed and we were unable to recover it. 
00:27:13.186 [2024-11-17 14:37:02.358513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.186 [2024-11-17 14:37:02.358546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.186 qpair failed and we were unable to recover it. 00:27:13.465 [2024-11-17 14:37:02.358740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.465 [2024-11-17 14:37:02.358776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.465 qpair failed and we were unable to recover it. 00:27:13.465 [2024-11-17 14:37:02.358914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.465 [2024-11-17 14:37:02.358947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.465 qpair failed and we were unable to recover it. 00:27:13.465 [2024-11-17 14:37:02.359187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.465 [2024-11-17 14:37:02.359220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.465 qpair failed and we were unable to recover it. 00:27:13.465 [2024-11-17 14:37:02.359492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.465 [2024-11-17 14:37:02.359527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.465 qpair failed and we were unable to recover it. 00:27:13.465 [2024-11-17 14:37:02.359760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.465 [2024-11-17 14:37:02.359797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.465 qpair failed and we were unable to recover it. 00:27:13.465 [2024-11-17 14:37:02.359929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.465 [2024-11-17 14:37:02.359962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.465 qpair failed and we were unable to recover it. 00:27:13.465 [2024-11-17 14:37:02.360148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.465 [2024-11-17 14:37:02.360181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.465 qpair failed and we were unable to recover it. 00:27:13.465 [2024-11-17 14:37:02.360489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.465 [2024-11-17 14:37:02.360525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.465 qpair failed and we were unable to recover it. 00:27:13.465 [2024-11-17 14:37:02.360671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.465 [2024-11-17 14:37:02.360706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.465 qpair failed and we were unable to recover it. 
00:27:13.465 [2024-11-17 14:37:02.360896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.465 [2024-11-17 14:37:02.360977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:13.465 qpair failed and we were unable to recover it.
00:27:13.468 [... the same failure sequence, now against tqpair=0x1fb6ba0, repeats 129 more times, 14:37:02.361208 through 14:37:02.393085, differing only in timestamps ...]
00:27:13.469 [2024-11-17 14:37:02.393290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.393323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.393518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.393552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.393803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.393837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.394038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.394071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.394273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.394307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.394574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.394608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.394892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.394926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.395205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.395239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.395472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.395508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.395778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.395812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 
00:27:13.469 [2024-11-17 14:37:02.396072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.396105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.396365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.396400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.396704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.396738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.397043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.397077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.397274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.397308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.397596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.397631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.397903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.397937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.398140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.398174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.398309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.398342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.398582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.398621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 
00:27:13.469 [2024-11-17 14:37:02.398753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.398787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.399001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.399033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.399241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.399276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.399534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.399570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.399755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.399789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.400070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.400104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.400226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.400260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.400516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.400551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.400758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.400791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.400916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.400950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 
00:27:13.469 [2024-11-17 14:37:02.401131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.401164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.401359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.469 [2024-11-17 14:37:02.401394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.469 qpair failed and we were unable to recover it. 00:27:13.469 [2024-11-17 14:37:02.401686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.401720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.401981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.402015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.402217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.402251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.402520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.402554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.402684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.402718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.402924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.402957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.403233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.403266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.403449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.403484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 
00:27:13.470 [2024-11-17 14:37:02.403703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.403736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.404008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.404041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.404304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.404338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.404640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.404674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.404865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.404897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.405091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.405124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.405403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.405447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.405640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.405674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.405943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.405978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.406186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.406219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 
00:27:13.470 [2024-11-17 14:37:02.406424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.406460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.406653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.406688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.406898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.406930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.407206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.407240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.407526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.407561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.407762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.407796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.408000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.408033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.408282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.408316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.408602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.408636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.408942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.408976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 
00:27:13.470 [2024-11-17 14:37:02.409244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.409278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.409558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.409594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.409878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.409912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.410191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.410224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.410512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.410547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.410823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.410857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.411062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.411097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.411297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.411330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.411544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.411577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 00:27:13.470 [2024-11-17 14:37:02.411771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.470 [2024-11-17 14:37:02.411804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.470 qpair failed and we were unable to recover it. 
00:27:13.471 [2024-11-17 14:37:02.412013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.412047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.412274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.412308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.412606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.412642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.412825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.412858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.413074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.413108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.413370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.413406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.413645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.413679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.413950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.413984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.414238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.414273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.414531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.414567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 
00:27:13.471 [2024-11-17 14:37:02.414781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.414814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.415091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.415124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.415303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.415338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.415556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.415589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.415866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.415898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.416093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.416126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.416399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.416433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.416634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.416668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.416920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.416954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.417250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.417282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 
00:27:13.471 [2024-11-17 14:37:02.417485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.417520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.417800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.417832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.418134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.418168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.418431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.418466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.418768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.418802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.419001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.419035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.419241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.419274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.419454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.419489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.419672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.419705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.419919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.419953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 
00:27:13.471 [2024-11-17 14:37:02.420077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.420107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.420387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.420423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.420696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.420730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.420856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.420890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.421166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.421200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.421448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.421484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.421684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.421718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.421968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.471 [2024-11-17 14:37:02.422001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.471 qpair failed and we were unable to recover it. 00:27:13.471 [2024-11-17 14:37:02.422305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.422338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.422633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.422667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 
00:27:13.472 [2024-11-17 14:37:02.422877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.422910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.423207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.423241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.423496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.423531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.423732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.423766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.423880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.423920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.424130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.424163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.424373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.424408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.424686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.424719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.424898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.424931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.425135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.425168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 
00:27:13.472 [2024-11-17 14:37:02.425441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.425476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.425770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.425804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.426024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.426058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.426261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.426293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.426568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.426602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.426855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.426888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.427162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.427196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.427476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.427512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.427809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.427844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 00:27:13.472 [2024-11-17 14:37:02.428058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.472 [2024-11-17 14:37:02.428091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.472 qpair failed and we were unable to recover it. 
00:27:13.472 [2024-11-17 14:37:02.428371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.472 [2024-11-17 14:37:02.428406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:13.472 qpair failed and we were unable to recover it.
00:27:13.472 [... the same three-line failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt with timestamps from 14:37:02.428541 through 14:37:02.482217 ...]
00:27:13.478 [2024-11-17 14:37:02.482345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.478 [2024-11-17 14:37:02.482399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:13.478 qpair failed and we were unable to recover it.
00:27:13.478 [2024-11-17 14:37:02.482681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.482715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.482901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.482933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.483121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.483154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.483332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.483378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.483657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.483690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.483948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.483981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.484283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.484316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.484530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.484565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.484821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.484854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.485049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.485082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 
00:27:13.478 [2024-11-17 14:37:02.485202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.485237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.485377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.485412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.485621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.485654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.485856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.485889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.486194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.486227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.486423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.486457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.486583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.486617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.486890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.486923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.487104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.487143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.487370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.487406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 
00:27:13.478 [2024-11-17 14:37:02.487632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.487665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.487877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.487910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.488207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.488240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.488445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.488479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.488595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.488628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.488903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.488936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.489252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.489285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.489441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.489475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.489732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.489765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.489988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.490022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 
00:27:13.478 [2024-11-17 14:37:02.490227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.478 [2024-11-17 14:37:02.490259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.478 qpair failed and we were unable to recover it. 00:27:13.478 [2024-11-17 14:37:02.490547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.490582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.490807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.490841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.491057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.491090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.491214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.491247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.491441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.491475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.491730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.491764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.491957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.491990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.492269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.492302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.492447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.492481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 
00:27:13.479 [2024-11-17 14:37:02.492685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.492718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.492917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.492950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.493227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.493260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.493448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.493483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.493660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.493693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.493895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.493934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.494118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.494151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.494285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.494318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.494463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.494498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.494697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.494730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 
00:27:13.479 [2024-11-17 14:37:02.494918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.494950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.495127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.495160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.495367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.495402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.495623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.495657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.495922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.495956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.496178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.496211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.496476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.496511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.496695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.496730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.496920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.496953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.497235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.497268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 
00:27:13.479 [2024-11-17 14:37:02.497478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.497514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.497785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.497818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.498007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.498040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.498310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.498343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.498492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.479 [2024-11-17 14:37:02.498526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.479 qpair failed and we were unable to recover it. 00:27:13.479 [2024-11-17 14:37:02.498775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.498810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.499115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.499149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.499410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.499444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.499575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.499609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.499812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.499845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 
00:27:13.480 [2024-11-17 14:37:02.500120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.500155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.500341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.500386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.500640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.500673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.500905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.500938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.501153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.501186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.501311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.501345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.501565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.501601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.501737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.501771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.502046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.502081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.502287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.502320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 
00:27:13.480 [2024-11-17 14:37:02.502519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.502553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.502692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.502727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.502921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.502957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.503069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.503102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.503316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.503350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.503495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.503530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.503754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.503788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.504062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.504095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.504296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.504329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.504644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.504678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 
00:27:13.480 [2024-11-17 14:37:02.504861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.504895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.505172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.505206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.505498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.505534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.505698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.505731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.506057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.506091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.506393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.506429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.506700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.506733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.506883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.506915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.507043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.507078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.507280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.507313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 
00:27:13.480 [2024-11-17 14:37:02.507537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.507572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.507822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.507855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.508049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.508084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.480 qpair failed and we were unable to recover it. 00:27:13.480 [2024-11-17 14:37:02.508268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.480 [2024-11-17 14:37:02.508300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.508523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.508559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.508694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.508728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.508850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.508883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.509129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.509162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.509441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.509476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.509754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.509787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 
00:27:13.481 [2024-11-17 14:37:02.509929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.509964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.510215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.510248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.510448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.510482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.510611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.510651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.510908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.510943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.511223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.511255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.511450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.511485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.511609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.511641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.511970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.512004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.512286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.512321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 
00:27:13.481 [2024-11-17 14:37:02.512649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.512685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.512899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.512933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.513121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.513154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.513348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.513397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.513611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.513645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.513846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.513879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.514061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.514094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.514279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.514314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.514458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.514493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 00:27:13.481 [2024-11-17 14:37:02.514768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.514803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it. 
00:27:13.481 [2024-11-17 14:37:02.515054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.481 [2024-11-17 14:37:02.515088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.481 qpair failed and we were unable to recover it.
00:27:13.481 [... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x1fb6ba0 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats continuously from 14:37:02.515 through 14:37:02.551 ...]
00:27:13.485 [2024-11-17 14:37:02.551915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.485 [2024-11-17 14:37:02.551993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.485 qpair failed and we were unable to recover it.
00:27:13.485 [... the same failure pair for tqpair=0x7f5198000b90 (addr=10.0.0.2, port=4420) repeats continuously from 14:37:02.552 through 14:37:02.566 ...]
00:27:13.487 [2024-11-17 14:37:02.566376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.566411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.566525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.566558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.566678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.566711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.566899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.566932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.567050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.567083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.567297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.567331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.567467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.567501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.567793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.567826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.567959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.567992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.568183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.568218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 
00:27:13.487 [2024-11-17 14:37:02.568425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.568459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.568739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.568773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.568891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.568923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.569037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.569070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.569197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.569231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.569379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.569413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.569600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.569633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.569911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.569944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.570158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.570194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.570304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.570338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 
00:27:13.487 [2024-11-17 14:37:02.570471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.570505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.570807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.570842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.571110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.571144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.571461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.571496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.571756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.571789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.571921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.571955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.487 [2024-11-17 14:37:02.572091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.487 [2024-11-17 14:37:02.572124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.487 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.572305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.572339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.572617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.572653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.572835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.572873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 
00:27:13.488 [2024-11-17 14:37:02.573140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.573174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.573293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.573327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.573540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.573576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.573724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.573756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.573940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.573974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.574181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.574215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.574401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.574436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.574638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.574675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.574862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.574894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.575089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.575122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 
00:27:13.488 [2024-11-17 14:37:02.575312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.575346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.575545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.575580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.575795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.575829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.576031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.576065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.576361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.576396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.576551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.576584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.576740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.576775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.577013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.577045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.577187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.577223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.577371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.577406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 
00:27:13.488 [2024-11-17 14:37:02.577626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.577660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.577806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.577839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.578021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.578054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.578250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.578284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.578482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.578519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.578640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.578674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.578931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.578966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.579170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.579202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.579392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.579427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 00:27:13.488 [2024-11-17 14:37:02.579639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.488 [2024-11-17 14:37:02.579672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.488 qpair failed and we were unable to recover it. 
00:27:13.489 [2024-11-17 14:37:02.579877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.579911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.580130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.580175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.580308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.580340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.580492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.580525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.580648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.580682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.580883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.580917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.581039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.581071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.581260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.581293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.581453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.581488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.581689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.581726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 
00:27:13.489 [2024-11-17 14:37:02.581906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.581939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.582127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.582159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.582365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.582400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.582525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.582557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.582684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.582718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.582851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.582884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.583078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.583110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.583380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.583415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.583614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.583648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.583832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.583864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 
00:27:13.489 [2024-11-17 14:37:02.584049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.584082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.584272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.584305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.584531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.584565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.584759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.584792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.585059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.585092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.585291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.585324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.585456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.585491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.585674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.585707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.585820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.585852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.585977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.586011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 
00:27:13.489 [2024-11-17 14:37:02.586193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.586226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.586430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.586463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.586741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.586773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.586972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.587006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.587197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.587229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.587336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.587381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.587512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.587546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.489 [2024-11-17 14:37:02.587755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.489 [2024-11-17 14:37:02.587789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.489 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.587908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.587941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.588169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.588202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 
00:27:13.490 [2024-11-17 14:37:02.588330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.588386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.588507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.588539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.588749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.588781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.589054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.589089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.589309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.589344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.589471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.589505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.589641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.589673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.589943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.589976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.590164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.590198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.590400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.590440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 
00:27:13.490 [2024-11-17 14:37:02.590625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.590657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.590928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.590980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.591109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.591142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.591283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.591315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.591517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.591551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.591731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.591764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.591906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.591938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.592185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.592218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.592417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.592451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.592739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.592773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 
00:27:13.490 [2024-11-17 14:37:02.593041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.593074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.593201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.593233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.593418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.593452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.593654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.593686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.593880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.593912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.594046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.594079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.594215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.594247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.594431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.594465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.594617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.594649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 00:27:13.490 [2024-11-17 14:37:02.594834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.490 [2024-11-17 14:37:02.594867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.490 qpair failed and we were unable to recover it. 
00:27:13.490 [2024-11-17 14:37:02.594988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.490 [2024-11-17 14:37:02.595020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:13.490 qpair failed and we were unable to recover it.
00:27:13.490 [... identical connect() failures (errno = 111) and "qpair failed and we were unable to recover it" errors for tqpair=0x7f5198000b90 repeated back-to-back from 14:37:02.595 through 14:37:02.620 ...]
00:27:13.494 [2024-11-17 14:37:02.620607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.494 [2024-11-17 14:37:02.620701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:13.494 qpair failed and we were unable to recover it.
00:27:13.494 [... the same failure sequence repeated for tqpair=0x7f5190000b90 from 14:37:02.620 through 14:37:02.639 ...]
00:27:13.496 [2024-11-17 14:37:02.639569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.639603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.639791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.639826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.639946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.639987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.640133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.640165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.640286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.640331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.640469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.640502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.640762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.640797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.640917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.640950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.641123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.641157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.641261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.641294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 
00:27:13.496 [2024-11-17 14:37:02.641433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.641468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.641596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.641629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.641804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.641837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.641944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.641977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.642178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.642212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.642345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.642391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.642582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.642614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.642788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.642821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.496 [2024-11-17 14:37:02.642995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.496 [2024-11-17 14:37:02.643029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.496 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.643145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.643178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 
00:27:13.497 [2024-11-17 14:37:02.643290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.643323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.643518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.643552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.643744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.643776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.643964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.643997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.644171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.644204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.644312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.644342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.644621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.644654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.644831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.644864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.645122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.645155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.645375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.645409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 
00:27:13.497 [2024-11-17 14:37:02.645518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.645551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.645806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.645838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.646036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.646068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.646185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.646218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.646414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.646448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.646564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.646597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.646837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.646876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.646995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.647028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.647276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.647309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.647492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.647526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 
00:27:13.497 [2024-11-17 14:37:02.647705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.647737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.647909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.647942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.648119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.648152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.648337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.648376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.648547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.648579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.648772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.648805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.648914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.648947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.649123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.649156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.649334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.649396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.649681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.649714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 
00:27:13.497 [2024-11-17 14:37:02.649896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.649929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.650102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.650134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.497 qpair failed and we were unable to recover it. 00:27:13.497 [2024-11-17 14:37:02.650319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.497 [2024-11-17 14:37:02.650365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.650467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.650499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.650616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.650649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.650897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.650928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.651113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.651145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.651338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.651380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.651577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.651609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.651781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.651813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 
00:27:13.498 [2024-11-17 14:37:02.651914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.651946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.652182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.652214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.652396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.652429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.652609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.652642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.652836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.652868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.653105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.653137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.653309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.653343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.653458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.653490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.653754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.653787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.653912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.653944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 
00:27:13.498 [2024-11-17 14:37:02.654147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.654179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.654370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.654403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.654519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.654552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.654659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.654692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.654874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.654906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.655080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.655113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.655226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.655264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.655449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.655483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.655599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.655631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.655803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.655836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 
00:27:13.498 [2024-11-17 14:37:02.656100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.656132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.656237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.656268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.656479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.656513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.656628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.656658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.656765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.656797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.657056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.657088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.657257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.657290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.657472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.657505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.657621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.657654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 00:27:13.498 [2024-11-17 14:37:02.657836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.657869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.498 qpair failed and we were unable to recover it. 
00:27:13.498 [2024-11-17 14:37:02.657995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.498 [2024-11-17 14:37:02.658027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.658222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.658255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.658437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.658471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.658644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.658676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.658866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.658898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.659019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.659051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.659153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.659185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.659307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.659339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.659536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.659570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.659754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.659785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 
00:27:13.499 [2024-11-17 14:37:02.659995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.660028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.660214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.660247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.660453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.660487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.660614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.660647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.660771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.660803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.661010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.661043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.661239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.661271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.661443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.661476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.661648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.661680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.661813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.661846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 
00:27:13.499 [2024-11-17 14:37:02.662015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.662047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.662240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.662272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.662538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.662571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.662809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.662841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.663061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.663092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.663284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.663316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.663507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.663546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.663729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.663761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.663964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.663997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.664182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.664214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 
00:27:13.499 [2024-11-17 14:37:02.664400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.664433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.664608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.664640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.664750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.664781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.664959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.664991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.665166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.665202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.665390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.665424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.665597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.665631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.665821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.665853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.499 qpair failed and we were unable to recover it. 00:27:13.499 [2024-11-17 14:37:02.666031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.499 [2024-11-17 14:37:02.666063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.500 qpair failed and we were unable to recover it. 00:27:13.500 [2024-11-17 14:37:02.666299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.500 [2024-11-17 14:37:02.666331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.500 qpair failed and we were unable to recover it. 
00:27:13.500 [2024-11-17 14:37:02.666534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.500 [2024-11-17 14:37:02.666568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.500 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.710247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.710279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 
00:27:13.790 [2024-11-17 14:37:02.710457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.710490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.710727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.710759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.710875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.710907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.711073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.711109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.711229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.711261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.711521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.711554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.711796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.711829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.712104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.712136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.712323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.712375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.712578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.712610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 
00:27:13.790 [2024-11-17 14:37:02.712785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.712817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.713056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.713087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.713264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.713296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.713431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.713464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.713704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.713735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.713915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.713946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.714111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.714144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.714272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.714304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.714434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.714467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.714589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.714622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 
00:27:13.790 [2024-11-17 14:37:02.714750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.714782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.714967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.714999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.715183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.715215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.715394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.715428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.715559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.715591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.715791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.715824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.716010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.716042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.716162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.716194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.716416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.716448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.716634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.716666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 
00:27:13.790 [2024-11-17 14:37:02.716794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.716826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.717015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.717048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.790 [2024-11-17 14:37:02.717217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.790 [2024-11-17 14:37:02.717248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.790 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.717487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.717521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.717649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.717681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.717927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.717959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.718127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.718159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.718286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.718318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.718497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.718531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.718719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.718751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 
00:27:13.791 [2024-11-17 14:37:02.718870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.718901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.719014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.719046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.719223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.719255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.719441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.719481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.719679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.719710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.719882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.719915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.720196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.720228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.720346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.720386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.720494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.720526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.720658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.720690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 
00:27:13.791 [2024-11-17 14:37:02.720870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.720902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.721088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.721122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.721310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.721341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.721538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.721572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.721748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.721779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.721968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.722000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.722206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.722238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.722413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.722446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.722622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.722654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.722820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.722852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 
00:27:13.791 [2024-11-17 14:37:02.723037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.723069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.723238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.723270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.723387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.723421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.723525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.723556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.723682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.723714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.723973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.791 [2024-11-17 14:37:02.724005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.791 qpair failed and we were unable to recover it. 00:27:13.791 [2024-11-17 14:37:02.724117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.724149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.724318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.724350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.724492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.724524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.724710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.724743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 
00:27:13.792 [2024-11-17 14:37:02.724881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.724913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.725082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.725114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.725291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.725324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.725453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.725485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.725654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.725686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.725853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.725885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.725989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.726021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.726257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.726289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.726550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.726583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.726756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.726788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 
00:27:13.792 [2024-11-17 14:37:02.726957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.726988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.727217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.727250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.727374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.727407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.727578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.727615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.727829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.727861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.728040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.728071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.728258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.728290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.728460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.728493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.728683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.728715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.729000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.729032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 
00:27:13.792 [2024-11-17 14:37:02.729219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.729251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.729389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.729423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.729661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.729694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.729866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.729899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.730070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.730102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.730280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.730311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.730432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.730467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.730656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.730687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.730866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.730898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.792 [2024-11-17 14:37:02.731132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.731164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 
00:27:13.792 [2024-11-17 14:37:02.731348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.792 [2024-11-17 14:37:02.731390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.792 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.731598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.731630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.731807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.731838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.732031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.732064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.732176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.732208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.732394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.732428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.732559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.732591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.732722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.732754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.732956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.732988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.733105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.733137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 
00:27:13.793 [2024-11-17 14:37:02.733259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.733292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.733516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.733549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.733789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.733821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.734000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.734033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.734213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.734245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.734413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.734447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.734632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.734665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.734787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.734818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.735004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.735036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.735247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.735279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 
00:27:13.793 [2024-11-17 14:37:02.735410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.735443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.735545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.735577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.735706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.735738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.735856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.735893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.736077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.736109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.736288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.736319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.736452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.736485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.736721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.736753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.737015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.737047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.737166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.737198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 
00:27:13.793 [2024-11-17 14:37:02.737399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.737434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.737537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.793 [2024-11-17 14:37:02.737569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.793 qpair failed and we were unable to recover it. 00:27:13.793 [2024-11-17 14:37:02.737751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.794 [2024-11-17 14:37:02.737783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.794 qpair failed and we were unable to recover it. 00:27:13.794 [2024-11-17 14:37:02.737885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.794 [2024-11-17 14:37:02.737917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.794 qpair failed and we were unable to recover it. 00:27:13.794 [2024-11-17 14:37:02.738174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.794 [2024-11-17 14:37:02.738205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.794 qpair failed and we were unable to recover it. 00:27:13.794 [2024-11-17 14:37:02.738323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.794 [2024-11-17 14:37:02.738362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.794 qpair failed and we were unable to recover it. 00:27:13.794 [2024-11-17 14:37:02.738530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.794 [2024-11-17 14:37:02.738563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.794 qpair failed and we were unable to recover it. 00:27:13.794 [2024-11-17 14:37:02.738669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.794 [2024-11-17 14:37:02.738701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.794 qpair failed and we were unable to recover it. 00:27:13.794 [2024-11-17 14:37:02.738871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.794 [2024-11-17 14:37:02.738902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.794 qpair failed and we were unable to recover it. 00:27:13.794 [2024-11-17 14:37:02.739079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.794 [2024-11-17 14:37:02.739110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.794 qpair failed and we were unable to recover it. 
00:27:13.800 [2024-11-17 14:37:02.779841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.779874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.779996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.780029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.780151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.780184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.780377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.780410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.780597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.780629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.780733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.780767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.780950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.780983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.781088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.781120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.781372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.781406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.781630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.781663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 
00:27:13.800 [2024-11-17 14:37:02.781842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4af0 is same with the state(6) to be set 00:27:13.800 [2024-11-17 14:37:02.782111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.782197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.782415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.782453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.782649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.782682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.782876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.782909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.783081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.783113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.783321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.800 [2024-11-17 14:37:02.783364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.800 qpair failed and we were unable to recover it. 00:27:13.800 [2024-11-17 14:37:02.783562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.783596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.783724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.783756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.783933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.783965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 
00:27:13.801 [2024-11-17 14:37:02.784204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.784236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.784367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.784400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.784582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.784615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.784789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.784820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.785011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.785043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.785175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.785208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.785333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.785376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.785510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.785541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.785755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.785788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.785891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.785922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 
00:27:13.801 [2024-11-17 14:37:02.786057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.786088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.786227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.786260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.786441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.786475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.786615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.786646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.786833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.786865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.786966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.786999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.787191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.787223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.787404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.787443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.787565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.787597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.787840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.787871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 
00:27:13.801 [2024-11-17 14:37:02.788108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.788141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.788250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.788281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.788464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.788497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.788755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.788787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.788902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.788933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.789038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.789069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.789247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.789280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.789451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.789484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.789667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.789699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.789885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.789917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 
00:27:13.801 [2024-11-17 14:37:02.790091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.790123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.790374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.790409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.801 [2024-11-17 14:37:02.790534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.801 [2024-11-17 14:37:02.790566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.801 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.790753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.790785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.791024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.791057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.791184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.791215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.791327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.791368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.791482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.791514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.791611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.791643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.791907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.791939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 
00:27:13.802 [2024-11-17 14:37:02.792122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.792154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.792398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.792430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.792616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.792648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.792785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.792817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.792927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.792959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.793099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.793131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.793307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.793339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.793591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.793622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.793754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.793786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.793889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.793921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 
00:27:13.802 [2024-11-17 14:37:02.794160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.794192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.794325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.794366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.794607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.794639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.794816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.794847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.795041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.795072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.795243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.795274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.795472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.795505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.795677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.795715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.795925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.795956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.796146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.796178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 
00:27:13.802 [2024-11-17 14:37:02.796366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.796400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.796578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.796610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.796789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.796822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.797006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.797038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.797237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.797268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.797439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.802 [2024-11-17 14:37:02.797471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.802 qpair failed and we were unable to recover it. 00:27:13.802 [2024-11-17 14:37:02.797741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.797773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.797981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.798013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.798131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.798163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.798427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.798461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 
00:27:13.803 [2024-11-17 14:37:02.798593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.798626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.798815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.798849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.799050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.799080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.799211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.799242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.799349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.799401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.799573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.799604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.799736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.799771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.799951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.799983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.800162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.800195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.800331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.800377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 
00:27:13.803 [2024-11-17 14:37:02.800579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.800612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.800722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.800754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.800869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.800901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.801123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.801158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.801294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.801327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.801515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.801550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.801790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.801823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.802010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.802041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.802213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.802246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.802505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.802543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 
00:27:13.803 [2024-11-17 14:37:02.802647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.802679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.802799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.802832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.802963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.802996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.803209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.803242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.803484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.803520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.803642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.803675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.803792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.803823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.804013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.804053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.804166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.804198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 00:27:13.803 [2024-11-17 14:37:02.804318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.804349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.803 qpair failed and we were unable to recover it. 
00:27:13.803 [2024-11-17 14:37:02.804533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.803 [2024-11-17 14:37:02.804565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.804 qpair failed and we were unable to recover it. 00:27:13.804 [2024-11-17 14:37:02.804672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.804 [2024-11-17 14:37:02.804703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.804 qpair failed and we were unable to recover it. 00:27:13.804 [2024-11-17 14:37:02.804925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.804 [2024-11-17 14:37:02.804958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.804 qpair failed and we were unable to recover it. 00:27:13.804 [2024-11-17 14:37:02.805221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.804 [2024-11-17 14:37:02.805252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.804 qpair failed and we were unable to recover it. 00:27:13.804 [2024-11-17 14:37:02.805381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.804 [2024-11-17 14:37:02.805415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.804 qpair failed and we were unable to recover it. 00:27:13.804 [2024-11-17 14:37:02.805624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.804 [2024-11-17 14:37:02.805656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.804 qpair failed and we were unable to recover it. 00:27:13.804 [2024-11-17 14:37:02.805846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.804 [2024-11-17 14:37:02.805878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.804 qpair failed and we were unable to recover it. 00:27:13.804 [2024-11-17 14:37:02.806071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.804 [2024-11-17 14:37:02.806102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.804 qpair failed and we were unable to recover it. 00:27:13.804 [2024-11-17 14:37:02.806295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.804 [2024-11-17 14:37:02.806328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.804 qpair failed and we were unable to recover it. 00:27:13.804 [2024-11-17 14:37:02.806522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.804 [2024-11-17 14:37:02.806555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.804 qpair failed and we were unable to recover it. 
00:27:13.804 [2024-11-17 14:37:02.806802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.804 [2024-11-17 14:37:02.806834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:13.804 qpair failed and we were unable to recover it.
00:27:13.805 [... same connect()/qpair-failure triplet repeated for tqpair=0x7f5198000b90 through 2024-11-17 14:37:02.817658 ...]
00:27:13.805 [2024-11-17 14:37:02.817835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.805 [2024-11-17 14:37:02.817907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:13.805 qpair failed and we were unable to recover it.
00:27:13.806 [... same triplet repeated for tqpair=0x1fb6ba0 through 2024-11-17 14:37:02.823053 ...]
00:27:13.806 [2024-11-17 14:37:02.823169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.806 [2024-11-17 14:37:02.823206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:13.806 qpair failed and we were unable to recover it.
00:27:13.810 [... same triplet repeated for tqpair=0x7f5198000b90 through 2024-11-17 14:37:02.850117 ...]
00:27:13.810 [2024-11-17 14:37:02.850379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.850413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 00:27:13.810 [2024-11-17 14:37:02.850609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.850641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 00:27:13.810 [2024-11-17 14:37:02.850776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.850809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 00:27:13.810 [2024-11-17 14:37:02.850997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.851030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 00:27:13.810 [2024-11-17 14:37:02.851242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.851274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 00:27:13.810 [2024-11-17 14:37:02.851446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.851479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 00:27:13.810 [2024-11-17 14:37:02.851670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.851703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 00:27:13.810 [2024-11-17 14:37:02.851876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.851908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 00:27:13.810 [2024-11-17 14:37:02.852089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.852121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 00:27:13.810 [2024-11-17 14:37:02.852308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.852342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 
00:27:13.810 [2024-11-17 14:37:02.852618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.852651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 00:27:13.810 [2024-11-17 14:37:02.852782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.810 [2024-11-17 14:37:02.852815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.810 qpair failed and we were unable to recover it. 00:27:13.810 [2024-11-17 14:37:02.852920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.852952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.853148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.853180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.853346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.853389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.853510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.853544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.853716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.853749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.853921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.853953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.854133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.854166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.854372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.854406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 
00:27:13.811 [2024-11-17 14:37:02.854534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.854566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.854761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.854794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.855020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.855053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.855293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.855326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.855535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.855569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.855810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.855843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.855968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.856000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.856187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.856220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.856412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.856447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.856585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.856618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 
00:27:13.811 [2024-11-17 14:37:02.856736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.856770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.856953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.856992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.857194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.857226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.857349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.857410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.857647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.857681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.857874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.857906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.858091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.858125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.858243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.858275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.858476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.858511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.858694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.858726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 
00:27:13.811 [2024-11-17 14:37:02.858850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.858882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.859087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.859121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.859401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.859436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.859633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.859667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.859780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.859813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.859925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.859956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.860166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.860198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.860382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.811 [2024-11-17 14:37:02.860417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.811 qpair failed and we were unable to recover it. 00:27:13.811 [2024-11-17 14:37:02.860527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.860560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.860737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.860769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 
00:27:13.812 [2024-11-17 14:37:02.860914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.860947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.861143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.861175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.861308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.861341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.861478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.861509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.861611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.861643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.861762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.861795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.861902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.861933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.862191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.862223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.862360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.862395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.862529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.862562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 
00:27:13.812 [2024-11-17 14:37:02.862742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.862774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.862907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.862940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.863071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.863104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.863345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.863388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.863564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.863597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.863865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.863898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.864074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.864106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.864360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.864394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.864510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.864543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.864665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.864698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 
00:27:13.812 [2024-11-17 14:37:02.864873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.864907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.865008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.865045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.865227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.865259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.865394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.865429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.865636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.865668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.865837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.865869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.865986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.866019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.866211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.866243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.866426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.866459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.866567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.866600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 
00:27:13.812 [2024-11-17 14:37:02.866787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.866818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.866936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.812 [2024-11-17 14:37:02.866969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.812 qpair failed and we were unable to recover it. 00:27:13.812 [2024-11-17 14:37:02.867167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.867199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.867314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.867346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.867487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.867518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.867664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.867697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.867870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.867904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.868097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.868128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.868242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.868273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.868413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.868446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 
00:27:13.813 [2024-11-17 14:37:02.868622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.868656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.868849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.868882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.869006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.869039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.869225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.869257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.869385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.869418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.869657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.869691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.869809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.869842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.870030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.870062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.870266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.870299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.870419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.870452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 
00:27:13.813 [2024-11-17 14:37:02.870623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.870656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.870822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.870854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.871091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.871125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.871311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.871343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.871533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.871566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.871685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.871718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.871842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.871875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.872120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.872152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.872370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.872404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.872595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.872627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 
00:27:13.813 [2024-11-17 14:37:02.872763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.872795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.873000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.873038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.873227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.873259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.873389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.873422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.873544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.873577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.813 [2024-11-17 14:37:02.873683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.813 [2024-11-17 14:37:02.873715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.813 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.873893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.873926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.874111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.874144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.874263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.874293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.874515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.874548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 
00:27:13.814 [2024-11-17 14:37:02.874720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.874752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.874940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.874973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.875181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.875215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.875328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.875368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.875571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.875603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.875795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.875828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.876034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.876067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.876177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.876210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.876314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.876346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.876486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.876518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 
00:27:13.814 [2024-11-17 14:37:02.876751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.876784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.876975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.877008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.877190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.877223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.877426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.877461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.877586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.877619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.877744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.877777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.878015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.878048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.878226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.878260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.878483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.878518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 00:27:13.814 [2024-11-17 14:37:02.878650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.814 [2024-11-17 14:37:02.878682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:13.814 qpair failed and we were unable to recover it. 
00:27:13.817 [2024-11-17 14:37:02.894716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.817 [2024-11-17 14:37:02.894755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:13.817 qpair failed and we were unable to recover it.
[... the retry loop briefly reports tqpair=0x7f518c000b90, then returns to tqpair=0x7f5198000b90 with the same failure triplet ...]
00:27:13.818 [2024-11-17 14:37:02.902688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.818 [2024-11-17 14:37:02.902761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:13.818 qpair failed and we were unable to recover it.
[... the same failure triplet repeats for tqpair=0x1fb6ba0 ...]
00:27:13.820 [2024-11-17 14:37:02.916614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.916646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.916835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.916866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.917103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.917133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.917258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.917289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.917415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.917448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.917628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.917660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.917831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.917861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.917980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.918011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.918120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.918150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.918342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.918387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 
00:27:13.820 [2024-11-17 14:37:02.918503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.918536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.918720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.918752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.918944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.918974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.919154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.919185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.919397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.919429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.919619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.919649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.919835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.919867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.920041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.920073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.920271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.920304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.920444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.920475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 
00:27:13.820 [2024-11-17 14:37:02.920673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.820 [2024-11-17 14:37:02.920704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.820 qpair failed and we were unable to recover it. 00:27:13.820 [2024-11-17 14:37:02.920825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.920857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.921052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.921084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.921306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.921338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.921542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.921575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.921697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.921728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.921970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.922003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.922174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.922205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.922379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.922411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.922578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.922610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 
00:27:13.821 [2024-11-17 14:37:02.922725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.922755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.922996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.923028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.923142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.923173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.923367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.923401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.923512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.923543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.923779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.923812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.923938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.923969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.924164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.924196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.924473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.924506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.924691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.924722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 
00:27:13.821 [2024-11-17 14:37:02.924962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.924994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.925175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.925206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.925377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.925408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.925618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.925649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.925844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.925874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.926000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.926032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.926243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.926274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.926527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.926562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.926752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.926784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.926956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.926987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 
00:27:13.821 [2024-11-17 14:37:02.927167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.927200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.927385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.927419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.927601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.927632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.927768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.927800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.821 [2024-11-17 14:37:02.927930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.821 [2024-11-17 14:37:02.927961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.821 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.928076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.928107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.928362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.928395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.928508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.928540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.928715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.928747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.928864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.928895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 
00:27:13.822 [2024-11-17 14:37:02.929039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.929070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.929178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.929206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.929395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.929429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.929603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.929635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.929763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.929797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.929966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.929998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.930177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.930207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.930322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.930360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.930549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.930581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.930752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.930784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 
00:27:13.822 [2024-11-17 14:37:02.930959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.930992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.931175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.931205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.931388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.931420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.931549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.931581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.931786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.931819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.932009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.932041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.932142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.932173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.932299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.932336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.932463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.932496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.932607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.932636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 
00:27:13.822 [2024-11-17 14:37:02.932810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.932842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.932948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.932978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.933160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.933193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.933393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.933427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.933667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.933698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.933886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.933919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.934036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.934068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.934183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.934213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.934326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.822 [2024-11-17 14:37:02.934366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.822 qpair failed and we were unable to recover it. 00:27:13.822 [2024-11-17 14:37:02.934485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.934517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 
00:27:13.823 [2024-11-17 14:37:02.934780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.934811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.934943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.934975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.935148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.935180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.935309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.935339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.935539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.935569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.935769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.935801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.935971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.936002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.936118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.936148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.936334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.936374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.936572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.936603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 
00:27:13.823 [2024-11-17 14:37:02.936738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.936770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.937014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.937047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.937168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.937198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.937318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.937378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.937595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.937632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.937768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.937799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.938056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.938089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.938295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.938327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.938580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.938613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.938739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.938772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 
00:27:13.823 [2024-11-17 14:37:02.938904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.938935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.939178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.939211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.939333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.939375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.939490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.939522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.939725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.939757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.939948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.939979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.940221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.940253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.940433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.940466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.940739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.940772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.940938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.940969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 
00:27:13.823 [2024-11-17 14:37:02.941095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.941126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.941313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.941345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.941560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.941592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.941782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.941814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.823 qpair failed and we were unable to recover it. 00:27:13.823 [2024-11-17 14:37:02.941932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.823 [2024-11-17 14:37:02.941964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.942162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.942193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.942398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.942432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.942539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.942571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.942737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.942769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.942937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.942969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 
00:27:13.824 [2024-11-17 14:37:02.943153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.943184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.943390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.943422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.943627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.943659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.943833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.943864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.944112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.944144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.944326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.944367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.944476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.944507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.944675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.944706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.944882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.944914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 00:27:13.824 [2024-11-17 14:37:02.945100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.824 [2024-11-17 14:37:02.945133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:13.824 qpair failed and we were unable to recover it. 
00:27:13.824 [2024-11-17 14:37:02.945244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.824 [2024-11-17 14:37:02.945276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:13.824 qpair failed and we were unable to recover it.
00:27:13.824 [... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it." pair repeats for tqpair=0x1fb6ba0 through 14:37:02.945928 ...]
00:27:13.824 [2024-11-17 14:37:02.946232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.824 [2024-11-17 14:37:02.946305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:13.824 qpair failed and we were unable to recover it.
00:27:14.101 [... the same failure pair repeats continuously for tqpair=0x7f5190000b90, addr=10.0.0.2, port=4420 from 14:37:02.946588 through 14:37:02.989204 ...]
00:27:14.102 [2024-11-17 14:37:02.989447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.989480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.989685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.989716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.989846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.989877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.990054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.990087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.990255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.990287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.990409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.990443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.990575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.990609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.990786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.990819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.991055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.991088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.991296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.991329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 
00:27:14.102 [2024-11-17 14:37:02.991548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.991582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.991751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.991783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.991964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.991996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.992213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.992245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.992364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.992398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.992568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.992601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.992839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.992872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.993112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.993144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.993265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.993297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.993447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.993479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 
00:27:14.102 [2024-11-17 14:37:02.993755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.993787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.994025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.994057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.994259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.994292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.994489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.994523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.994698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.994730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.994838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.994870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.995045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.995077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.995202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.995235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.102 qpair failed and we were unable to recover it. 00:27:14.102 [2024-11-17 14:37:02.995432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.102 [2024-11-17 14:37:02.995466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.995599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.995632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 
00:27:14.103 [2024-11-17 14:37:02.995833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.995866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.995995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.996027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.996164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.996197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.996436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.996469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.996670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.996703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.996907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.996944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.997113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.997146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.997332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.997374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.997582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.997614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.997737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.997767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 
00:27:14.103 [2024-11-17 14:37:02.998032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.998062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.998249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.998279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.998452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.998485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.998677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.998710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.998892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.998924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.999039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.999071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.999240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.999271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.999459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.999493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.999758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:02.999791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:02.999983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.000016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 
00:27:14.103 [2024-11-17 14:37:03.000149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.000182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.000318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.000361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.000487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.000519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.000644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.000676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.000845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.000878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.001068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.001102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.001307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.001339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.001609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.001643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.001773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.001806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.002143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.002176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 
00:27:14.103 [2024-11-17 14:37:03.002377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.002411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.002658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.002690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.002818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.002851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.003024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.003057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.003190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.003224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.003332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.003376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.103 qpair failed and we were unable to recover it. 00:27:14.103 [2024-11-17 14:37:03.003501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.103 [2024-11-17 14:37:03.003533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.003720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.003753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.003927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.003959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.004242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.004275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 
00:27:14.104 [2024-11-17 14:37:03.004444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.004478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.004710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.004743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.005009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.005041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.005220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.005252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.005376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.005410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.005554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.005592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.005779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.005812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.005992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.006024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.006126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.006159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.006280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.006312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 
00:27:14.104 [2024-11-17 14:37:03.006450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.006485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.006658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.006691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.006817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.006849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.006966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.006999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.007185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.007218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.007503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.007537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.007802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.007834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.007960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.007993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.008183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.008217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.008338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.008381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 
00:27:14.104 [2024-11-17 14:37:03.008509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.008542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.008672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.008704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.008885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.008917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.009102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.009133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.009256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.009289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.009465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.009500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.009683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.009717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.009850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.009882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.010011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.010044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.010156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.010189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 
00:27:14.104 [2024-11-17 14:37:03.010381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.010415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.010533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.010566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.010690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.010723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.010985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.104 [2024-11-17 14:37:03.011017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.104 qpair failed and we were unable to recover it. 00:27:14.104 [2024-11-17 14:37:03.011219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.011251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.011465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.011498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.011604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.011637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.011807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.011841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.012044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.012077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.012219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.012253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 
00:27:14.105 [2024-11-17 14:37:03.012387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.012421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.012598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.012632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.012753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.012792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.012913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.012945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.013073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.013106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.013312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.013350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.013538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.013570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.013696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.013729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.013929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.013961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.014077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.014109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 
00:27:14.105 [2024-11-17 14:37:03.014219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.014254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.014397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.014431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.014694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.014726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.014837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.014869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.014995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.015028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.015136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.015167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.015286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.015318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.015451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.015485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.015603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.015636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 00:27:14.105 [2024-11-17 14:37:03.015817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.105 [2024-11-17 14:37:03.015850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.105 qpair failed and we were unable to recover it. 
00:27:14.105 [2024-11-17 14:37:03.015980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.105 [2024-11-17 14:37:03.016013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.105 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) for tqpair=0x7f5190000b90 repeat 172 times, from 14:37:03.015980 through 14:37:03.048436 ...]
00:27:14.110 [2024-11-17 14:37:03.048603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.110 [2024-11-17 14:37:03.048674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.110 qpair failed and we were unable to recover it.
[... the same failure for the new tqpair=0x7f5198000b90 repeats a further 38 times, through 14:37:03.055416 ...]
00:27:14.111 [2024-11-17 14:37:03.055612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.055643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.055823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.055855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.056034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.056105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.056303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.056340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.056483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.056516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.056693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.056725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.056907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.056940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.057151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.057184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.057375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.057409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.057523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.057557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 
00:27:14.111 [2024-11-17 14:37:03.057673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.057707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.057831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.057862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.057963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.057996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.058112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.058145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.058325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.058368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.058493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.058526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.058652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.058684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.058799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.058831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.111 [2024-11-17 14:37:03.058943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.111 [2024-11-17 14:37:03.058976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.111 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.059223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.059256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 
00:27:14.112 [2024-11-17 14:37:03.059450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.059485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.059663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.059695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.059897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.059930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.060036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.060067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.060246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.060277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.060393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.060426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.060619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.060651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.060768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.060799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.060933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.060965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.061160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.061204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 
00:27:14.112 [2024-11-17 14:37:03.061310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.061342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.061461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.061492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.061598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.061629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.061820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.061853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.061958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.061990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.062176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.062207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.062394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.062426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.062546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.062577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.062751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.062785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.062907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.062938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 
00:27:14.112 [2024-11-17 14:37:03.063060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.063091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.063335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.063381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.063551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.063583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.063784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.063816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.063929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.063960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.064169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.064200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.064393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.064429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.064549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.064581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.064760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.064790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.064902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.064933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 
00:27:14.112 [2024-11-17 14:37:03.065101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.065133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.065265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.065297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.065429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.065461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.065590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.065623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.065795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.065826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.066364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.066405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.066587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-17 14:37:03.066633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-17 14:37:03.066878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.066912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.067177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.067210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.067391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.067425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 
00:27:14.113 [2024-11-17 14:37:03.067537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.067570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.067676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.067707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.067832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.067865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.067989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.068021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.068150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.068181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.068308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.068342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.068479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.068510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.068619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.068652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.068778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.068809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.068924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.068957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 
00:27:14.113 [2024-11-17 14:37:03.069138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.069171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.069294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.069325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.069531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.069562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.069678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.069710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.069891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.069923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.070101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.070134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.070257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.070288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.070400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.070435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.070544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.070574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.070762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.070794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 
00:27:14.113 [2024-11-17 14:37:03.070907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.070940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.071128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.071161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.071341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.071383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.071499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.071538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.071644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.071678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.071797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.071828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.071995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.072029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.072282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.072314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.072505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.072539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.072736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.072768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 
00:27:14.113 [2024-11-17 14:37:03.072875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.072907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.073082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.073113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.073230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.073263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.073433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.073468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.073594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-17 14:37:03.073625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-17 14:37:03.073758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.073791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.074053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.074084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.074294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.074327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.074464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.074495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.074622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.074652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 
00:27:14.114 [2024-11-17 14:37:03.074754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.074787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.074971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.075001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.075119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.075150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.075338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.075384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.075556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.075589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.075703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.075733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.075969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.076002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.076118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.076149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.076407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.076442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.076560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.076592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 
00:27:14.114 [2024-11-17 14:37:03.076698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.076731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.076907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.076940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.077058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.077090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.077217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.077248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.077463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.077496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.077702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.077734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.077850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.077882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.078059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.078091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.078201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.078233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.078366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.078399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 
00:27:14.114 [2024-11-17 14:37:03.078501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.078532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.078708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.078740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.078914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.078946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.079118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.079149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.079269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.079300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.079490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.079524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.079705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.079737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.079841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.079872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.080000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.080032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 00:27:14.114 [2024-11-17 14:37:03.080216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.114 [2024-11-17 14:37:03.080248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.114 qpair failed and we were unable to recover it. 
00:27:14.115 [2024-11-17 14:37:03.080372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.115 [2024-11-17 14:37:03.080406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.115 qpair failed and we were unable to recover it.
[... the identical posix.c:1054 / nvme_tcp.c:2288 error pair repeats for every connect attempt from 14:37:03.080372 through 14:37:03.121336; each attempt against tqpair=0x1fb6ba0 (addr=10.0.0.2, port=4420) fails with errno = 111, followed by "qpair failed and we were unable to recover it." ...]
00:27:14.120 [2024-11-17 14:37:03.121454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.121484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.121669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.121700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.121881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.121909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.122113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.122141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.122259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.122287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.122495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.122526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.122701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.122731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.123015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.123043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.123168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.123196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.123308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.123336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 
00:27:14.120 [2024-11-17 14:37:03.123461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.123489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.123750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.123779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.123917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.123945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.124185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.120 [2024-11-17 14:37:03.124216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.120 qpair failed and we were unable to recover it. 00:27:14.120 [2024-11-17 14:37:03.124340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.124378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.124498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.124528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.124745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.124772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.124958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.124987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.125127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.125156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.125327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.125363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 
00:27:14.121 [2024-11-17 14:37:03.125482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.125512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.125747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.125777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.125894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.125923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.126022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.126052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.126243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.126272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.126447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.126477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.126664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.126693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.126813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.126843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.127024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.127059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.127264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.127293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 
00:27:14.121 [2024-11-17 14:37:03.127507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.127538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.127653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.127682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.127907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.127936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.128102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.128133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.128319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.128348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.128604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.128635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.128819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.128849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.128951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.128981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.129112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.129142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.129274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.129303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 
00:27:14.121 [2024-11-17 14:37:03.129441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.129471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.129654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.129684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.129926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.129956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.130073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.130103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.130274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.130303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.130443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.130475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.130644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.130673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.130856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.130885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.131007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.131036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.131212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.131241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 
00:27:14.121 [2024-11-17 14:37:03.131365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.131396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.131589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.131619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.121 [2024-11-17 14:37:03.131740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.121 [2024-11-17 14:37:03.131770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.121 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.131944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.131974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.132163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.132191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.132413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.132451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.132643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.132672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.132794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.132824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.133003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.133033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.133136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.133166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 
00:27:14.122 [2024-11-17 14:37:03.133359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.133390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.133571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.133602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.133719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.133748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.133856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.133886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.134057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.134086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.134292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.134321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.134507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.134537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.134717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.134747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.134917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.134947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.135132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.135162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 
00:27:14.122 [2024-11-17 14:37:03.135275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.135304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.135494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.135525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.135626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.135656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.135834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.135864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.135983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.136013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.136137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.136167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.136285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.136315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.136452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.136483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.136673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.136703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.136829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.136859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 
00:27:14.122 [2024-11-17 14:37:03.137094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.137124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.137294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.137324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.137526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.137556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.137703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.137732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.137971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.138000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.138167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.138196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.138374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.138405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.138592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.138622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.138791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.138820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.139037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.139065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 
00:27:14.122 [2024-11-17 14:37:03.139178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.122 [2024-11-17 14:37:03.139207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.122 qpair failed and we were unable to recover it. 00:27:14.122 [2024-11-17 14:37:03.139403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.139435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.139539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.139569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.139739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.139768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.139886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.139915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.140105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.140134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.140392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.140423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.140543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.140576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.140888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.140918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.141022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.141051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 
00:27:14.123 [2024-11-17 14:37:03.141237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.141266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.141470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.141500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.141671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.141699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.141873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.141903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.142011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.142039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.142322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.142362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.142624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.142655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.142823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.142853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.143021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.143050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.143223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.143252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 
00:27:14.123 [2024-11-17 14:37:03.143439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.143471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.143661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.143690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.143804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.143834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.144014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.144045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.144147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.144176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.144380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.144412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.144536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.144566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.144805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.144835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.144938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.144968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.145099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.145128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 
00:27:14.123 [2024-11-17 14:37:03.145253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.145282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.145453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.145484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.145652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.145686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.145805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.145845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.145962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.145991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.146201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.146232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.123 [2024-11-17 14:37:03.146357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.123 [2024-11-17 14:37:03.146388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.123 qpair failed and we were unable to recover it. 00:27:14.124 [2024-11-17 14:37:03.146524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.124 [2024-11-17 14:37:03.146553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.124 qpair failed and we were unable to recover it. 00:27:14.124 [2024-11-17 14:37:03.146789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.124 [2024-11-17 14:37:03.146818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.124 qpair failed and we were unable to recover it. 00:27:14.124 [2024-11-17 14:37:03.147055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.124 [2024-11-17 14:37:03.147085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.124 qpair failed and we were unable to recover it. 
00:27:14.124 [2024-11-17 14:37:03.147264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.124 [2024-11-17 14:37:03.147295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.124 qpair failed and we were unable to recover it.
00:27:14.124 [2024-11-17 14:37:03.147432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.124 [2024-11-17 14:37:03.147464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.124 qpair failed and we were unable to recover it.
00:27:14.124 [2024-11-17 14:37:03.147580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.124 [2024-11-17 14:37:03.147610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.124 qpair failed and we were unable to recover it.
00:27:14.124 [2024-11-17 14:37:03.147729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.124 [2024-11-17 14:37:03.147758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.124 qpair failed and we were unable to recover it.
00:27:14.124 [2024-11-17 14:37:03.147877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.394 [2024-11-17 14:37:03.445875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.394 qpair failed and we were unable to recover it.
00:27:14.394 [2024-11-17 14:37:03.446184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.394 [2024-11-17 14:37:03.446227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.394 qpair failed and we were unable to recover it.
00:27:14.394 [2024-11-17 14:37:03.446462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.394 [2024-11-17 14:37:03.446498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.394 qpair failed and we were unable to recover it.
00:27:14.394 [2024-11-17 14:37:03.446717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.394 [2024-11-17 14:37:03.446750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.394 qpair failed and we were unable to recover it.
00:27:14.394 [2024-11-17 14:37:03.446965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.394 [2024-11-17 14:37:03.446996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.394 qpair failed and we were unable to recover it.
00:27:14.394 [2024-11-17 14:37:03.447206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.394 [2024-11-17 14:37:03.447238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.394 qpair failed and we were unable to recover it.
00:27:14.394 [2024-11-17 14:37:03.447480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.394 [2024-11-17 14:37:03.447516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.394 qpair failed and we were unable to recover it.
00:27:14.394 [2024-11-17 14:37:03.447710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.394 [2024-11-17 14:37:03.447743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.394 qpair failed and we were unable to recover it.
00:27:14.394 [2024-11-17 14:37:03.447854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.394 [2024-11-17 14:37:03.447886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.448010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.448043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.448226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.448259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.448439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.448474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.448715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.448748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.448887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.448919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.449129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.449161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.449339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.449397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.449526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.449565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.449785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.449818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.449950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.449983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.450186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.450218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.450409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.450445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.450553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.450585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.450689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.450721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.450861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.450893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.452265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.452319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.452548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.452583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.452725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.452756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.452891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.452925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.453044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.453076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.453212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.453244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.453439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.453474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.453601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.453634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.453888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.453920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.454048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.454080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.454192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.454224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.454338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.454384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.454520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.454553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.454794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.454826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.455001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.455033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.455157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.455189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.455306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.455338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.455538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.455570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.455683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.455715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.455846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.455886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.455997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.456028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.456159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.395 [2024-11-17 14:37:03.456192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.395 qpair failed and we were unable to recover it.
00:27:14.395 [2024-11-17 14:37:03.456381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.456416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.456604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.456636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.456828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.456860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.457035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.457067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.457260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.457292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.457474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.457507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.457698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.457729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.457902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.457935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.458105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.458138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.458242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.458273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.458389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.458423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.458600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.458673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.458831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.458868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.459063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.459097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.459376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.459410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.459534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.459567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.459696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.459729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.459860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.459892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.460064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.460096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.460316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.460349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.460554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.460586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.460704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.460735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.460947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.460979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.461100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.461133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.461247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.461288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.461434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.461467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.461657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.461689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.461879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.461911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.462025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.462057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.462187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.462219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.462348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.462393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.462519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.462550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.462655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.462687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.462880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.462911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.463044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.463076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.463208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.463239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.463389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.463424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.396 [2024-11-17 14:37:03.463604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.396 [2024-11-17 14:37:03.463636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.396 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.463765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.463797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.463916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.463947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.464138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.464168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.464368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.464403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.464516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.464549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.464655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.464685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.464803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.464834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.464949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.464979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.465189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.465220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.465395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.465429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.465554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.465587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.465700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.465730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.465833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.465863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.466023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.466094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.466268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.466339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.466499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.466536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.466785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.466818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.466935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.466970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.467088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.467121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.467238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.467271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.467393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.467428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.467536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.467567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.467689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.467721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.467838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.467870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.467975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.468007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.468110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.468141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.468317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.468453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.468582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.468615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.468785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.468817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.468989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.469023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.469230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.469263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.469505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.469538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.469644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.469677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.469783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.469814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.469996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.470028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.470143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.470175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.470294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.397 [2024-11-17 14:37:03.470326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.397 qpair failed and we were unable to recover it.
00:27:14.397 [2024-11-17 14:37:03.470447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.470479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.470593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.470631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.470798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.470829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.471070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.471103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.471232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.471265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.471381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.471414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.471529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.471562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.471751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.471782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.471893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.471930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.472104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.472135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.472308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.472339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.472472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.472506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.472766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.472797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.472899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.472930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.473115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.473148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.473329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.473372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.473538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.473612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.473762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.473798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.473915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.473948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.474146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.474181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.474392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.474427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.474629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.474663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.474863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.474895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.475077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.475109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.475233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.475265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.475394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.475429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.475623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.475655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.475781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.475814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.476010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.476042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.476232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.476263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.476541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.476575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.476764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.476796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.476968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.477001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.478375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.478428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.478627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.478662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.478836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.478869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.478996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.479028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.479147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.479181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.398 qpair failed and we were unable to recover it.
00:27:14.398 [2024-11-17 14:37:03.479371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.398 [2024-11-17 14:37:03.479405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.479526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.479558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.479676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.479710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.479919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.479952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.480144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.480178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.480293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.480333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.480530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.480563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.480760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.480792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.480974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.481006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.481121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.481153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.481331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.481374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.481498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.481531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.481668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.481702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.481943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.481975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.482085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.482118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.482249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.482280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.482421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.482456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.482565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.482597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.482703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.482735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.484079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.484130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.484367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.484403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.484591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.484624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.484831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.484863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.485043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.485075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.485295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.485327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.485536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.485568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.485682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.485714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.485889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.485921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.486040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.486072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.486289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.486321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.486536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.486570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.486692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.486723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.399 qpair failed and we were unable to recover it.
00:27:14.399 [2024-11-17 14:37:03.486841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.399 [2024-11-17 14:37:03.486881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.487004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.487036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.487168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.487200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.487313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.487345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.487539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.487572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.487834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.487866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.488038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.488072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.488203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.488236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.488411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.488446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.488623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.488656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.488768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.488799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.488989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.489020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.489200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.489232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.489346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.489386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.489579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.489611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.489847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.489879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.490004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.490035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.490155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.490187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.490305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.490338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.490568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.490601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.491906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.491957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.492164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.492196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.492387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.492420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.492620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.492652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.492835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.492867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 
00:27:14.400 [2024-11-17 14:37:03.493036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.493068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.493186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.493218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.493402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.493442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.493563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.493596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.493727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.493759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.493994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.494026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.494195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.494227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.494409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.494442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.494636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.494669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-17 14:37:03.494876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-17 14:37:03.494908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 
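For reference while reading the repeated failures above and below: errno = 111 is ECONNREFUSED on Linux, i.e. the connect() issued by posix_sock_create reached 10.0.0.2 but nothing was accepting on TCP port 4420 (the NVMe/TCP target side of the test was down or not yet listening), so nvme_tcp_qpair_connect_sock could not set up the qpair's socket. A minimal sketch, not SPDK code, that reproduces the same errno; the loopback address and the assumption that nothing listens on that port are illustrative:

import errno
import socket

# On Linux, ECONNREFUSED has the numeric value 111 -- the value posix.c prints.
print(errno.ECONNREFUSED)  # -> 111 on Linux

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # Assumes no listener on this port, like the target being down in the log.
    s.connect(("127.0.0.1", 4420))
except OSError as e:
    print(e.errno, errno.errorcode[e.errno])  # -> 111 ECONNREFUSED
finally:
    s.close()

The wall continues below because the host side keeps retrying the qpair connect while the target remains unreachable.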
00:27:14.400 [2024-11-17 14:37:03.495023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.495055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.495249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.495281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.495399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.495431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.400 qpair failed and we were unable to recover it.
00:27:14.400 [2024-11-17 14:37:03.495699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.400 [2024-11-17 14:37:03.495731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.495911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.495945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.496220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.496251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.496450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.496523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.496706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.496777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.496933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.496969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.497158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.497190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.497304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.497336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.497536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.497569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.497675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.497705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.497823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.497855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.497969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.498000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.498107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.498140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.498264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.498295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.498429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.498463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.498638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.498671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.498845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.498885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.499093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.499125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.499296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.499328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.499519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.499551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.499745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.499776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.501147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.501201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.501430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.501466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.501658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.501689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.501833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.501865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.501994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.502026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.502158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.502189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.502373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.502406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.502532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.502565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.502673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.502704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.502974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.503007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.503208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.503241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.503413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.503445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.503576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.503608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.503715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.503747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.503873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.503905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.504110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.504142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.504379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.504412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.401 qpair failed and we were unable to recover it.
00:27:14.401 [2024-11-17 14:37:03.504656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.401 [2024-11-17 14:37:03.504687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.504880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.504912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.505048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.505079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.505187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.505218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.505458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.505492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.505723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.505795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.506054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.506091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.506238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.506271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.506403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.506438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.506554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.506585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.506779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.506811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.506988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.507020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.507211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.507243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.507430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.507465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.507587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.507619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.507796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.507828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.507960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.507992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.508157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.508188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.508313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.508362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.508482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.508516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.508716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.508747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.508957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.508990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.509203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.509235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.509371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.509404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.509529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.509560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.509687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.509717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.509837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.509868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.509968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.509998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.510190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.510222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.510349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.510392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.510535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.510568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.510820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.510852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.511053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.511085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.511200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.511232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.511468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.511501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.511689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.511721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.511913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.511944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.512113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.512144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.402 qpair failed and we were unable to recover it.
00:27:14.402 [2024-11-17 14:37:03.512314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.402 [2024-11-17 14:37:03.512345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.512477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.512509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.512633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.512663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.512775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.512807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.512981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.513013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.513117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.513147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.513277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.513308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.513482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.513555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.513770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.513812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.514011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.514049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.514222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.514254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.514382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.514415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.514586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.514617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.514798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.514831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.515004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.515035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.515271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.515303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.515451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.515484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.515599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.515631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.515749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.515780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.515960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.515991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.516093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.516132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.516298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.516330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.516470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.516504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.516732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.516765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.516885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.516916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.517030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.517062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.517236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.517267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.517444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.517478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.517639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.517671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.517920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.517951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.518087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.518119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.518225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.518256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.518523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.518554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.518674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.518705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.518824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.403 [2024-11-17 14:37:03.518857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.403 qpair failed and we were unable to recover it.
00:27:14.403 [2024-11-17 14:37:03.518960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.518992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.519095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.519127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.519317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.519349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.519600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.519632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.519804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.519836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.519943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.519975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.520081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.520113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.520237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.520268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.520389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.520422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.520553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.520585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.520781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.520813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.520948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.520980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.521200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.521271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.521534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.521607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.521742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.521777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.522011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.522045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.522173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.522206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.522319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.522363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.522484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.522518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.522628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.522660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.522901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.522933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.523049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.523081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.523290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.523322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.523453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.523488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.523602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.523639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.523832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.523864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.524047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.524078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.524492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.524524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.524725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.524758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.524936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.524968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.525138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.525170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.525350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.525396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.525514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.525546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.525802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.525834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.527173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.527225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.527531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.404 [2024-11-17 14:37:03.527566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.404 qpair failed and we were unable to recover it.
00:27:14.404 [2024-11-17 14:37:03.528872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.404 [2024-11-17 14:37:03.528921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.404 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.529246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.529279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.529418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.529452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.529732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.529764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.529953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.529985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.530106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.530138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.530258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.530289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.530482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.530514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.530686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.530717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.530832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.530865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 
00:27:14.405 [2024-11-17 14:37:03.530983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.531015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.531144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.531176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.531381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.531413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.531532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.531563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.531753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.531786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.531960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.531992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.532236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.532274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.532392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.532425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.532552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.532584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.532702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.532734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 
00:27:14.405 [2024-11-17 14:37:03.532850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.532882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.533065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.533096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.533272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.533304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.533493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.533527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.533658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.533690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.533894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.533925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.534104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.534136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.534249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.534281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.534468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.534501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.534607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.534639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 
00:27:14.405 [2024-11-17 14:37:03.534892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.534925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.535099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.535131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.535320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.535364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.535559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.535591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.535764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.535796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.535968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.536000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.536166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.536195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.536301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.536330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-17 14:37:03.536462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-17 14:37:03.536492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.537874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.537923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 
00:27:14.406 [2024-11-17 14:37:03.538147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.538179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.538289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.538322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.538571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.538605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.538877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.538906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.539088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.539118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.539294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.539323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.539522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.539552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.539731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.539760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.539956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.539986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.540163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.540192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 
00:27:14.406 [2024-11-17 14:37:03.540298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.540328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.540533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.540563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.540679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.540709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.540827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.540856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.541052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.541082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.541251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.541281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.541465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.541505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.541764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.541793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.541906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.541936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.542044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.542072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 
00:27:14.406 [2024-11-17 14:37:03.542174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.542203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.543427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.543472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.543671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.543701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.543955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.543985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.544172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.544201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.544436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.544466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.544576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.544605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.544778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.544807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.545049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.545079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.545204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.545232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 
00:27:14.406 [2024-11-17 14:37:03.545431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.545462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.545655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.545686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.545855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.545887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.546054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.546086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.546204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.546236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.546478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.546510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-17 14:37:03.546682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-17 14:37:03.546713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.546882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.546914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.547137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.547168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.547295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.547326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 
00:27:14.407 [2024-11-17 14:37:03.547539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.547572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.547759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.547790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.547906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.547938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.548141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.548173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.548301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.548333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.548454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.548487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.548689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.548721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.548888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.548919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.549090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.549123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.549415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.549452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 
00:27:14.407 [2024-11-17 14:37:03.549583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.549616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.549806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.549838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.549961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.549994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.550262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.550294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.550445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.550478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.550742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.550774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.550900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.550938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.551121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.551153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.551284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.551316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.551535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.551567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 
00:27:14.407 [2024-11-17 14:37:03.551741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.551773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.551880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.551911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.552067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.552098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.552218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.552249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.552376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.552408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.552584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.552616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.552753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.552785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.553022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.553053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.553191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.553222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.553346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.553389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 
00:27:14.407 [2024-11-17 14:37:03.553526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.553558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.553730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.553762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.553936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.553968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.554149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-17 14:37:03.554180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-17 14:37:03.554311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.554343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.554471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.554503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.554680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.554712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.554938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.554969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.555090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.555121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.555229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.555260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 
00:27:14.408 [2024-11-17 14:37:03.555374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.555409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.555647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.555678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.555785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.555816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.556061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.556094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.556303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.556335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.556529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.556561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.556670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.556701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.556855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.556887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.557003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.557034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-17 14:37:03.557144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-17 14:37:03.557176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 
00:27:14.408 [2024-11-17 14:37:03.557291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.408 [2024-11-17 14:37:03.557323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.408 qpair failed and we were unable to recover it.
00:27:14.409 [2024-11-17 14:37:03.563644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.409 [2024-11-17 14:37:03.563717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.409 qpair failed and we were unable to recover it.
00:27:14.411 [2024-11-17 14:37:03.578574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.411 [2024-11-17 14:37:03.578645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.411 qpair failed and we were unable to recover it.
00:27:14.412 [2024-11-17 14:37:03.590871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.412 [2024-11-17 14:37:03.590928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.412 qpair failed and we were unable to recover it.
00:27:14.414 [2024-11-17 14:37:03.600999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.414 [2024-11-17 14:37:03.601031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.414 qpair failed and we were unable to recover it.
00:27:14.414 [2024-11-17 14:37:03.601166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.601198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.601299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.601331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.601531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.601563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.601675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.601706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.601939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.601971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.602093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.602126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.602252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.602283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.602535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.602568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.602670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.602702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.602883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.602914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 
00:27:14.414 [2024-11-17 14:37:03.603089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.603122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.603230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.603261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.603402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.603437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.603558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.603590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.603779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-17 14:37:03.603811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-17 14:37:03.603981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.604013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.604129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.604161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.604281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.604313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.604447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.604480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.604615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.604647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 
00:27:14.693 [2024-11-17 14:37:03.604817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.604849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.604957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.604989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.605194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.605226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.605464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.605504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.605619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.605651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.605775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.605807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.605980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.606011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.606119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.606151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.606322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.606361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.606489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.606527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 
00:27:14.693 [2024-11-17 14:37:03.606657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.606690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.606811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.606842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.607007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.607039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.607144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.607175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.607283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.607315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.607435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.607469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.607678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.607709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.607834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.607866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.608046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.608078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.608182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.608213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 
00:27:14.693 [2024-11-17 14:37:03.608386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.608418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.608540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.693 [2024-11-17 14:37:03.608571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.693 qpair failed and we were unable to recover it. 00:27:14.693 [2024-11-17 14:37:03.608687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.608718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.608825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.608856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.608960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.608992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.609167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.609198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.609377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.609410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.609580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.609611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.609738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.609769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.609881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.609912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 
00:27:14.694 [2024-11-17 14:37:03.610032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.610064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.610186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.610217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.610406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.610439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.610681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.610714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.610828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.610859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.610966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.610998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.611132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.611164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.611271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.611302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.611544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.611577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.611695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.611727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 
00:27:14.694 [2024-11-17 14:37:03.611831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.611862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.611969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.612001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.612198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.612230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.612497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.612534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.612653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.612684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.612806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.612837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.613023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.613054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.613233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.613264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.613451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.613484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.613657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.613688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 
00:27:14.694 [2024-11-17 14:37:03.613869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.613901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.614022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.614055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.614166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.614197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.614376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.614408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.614600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.614636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.614761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.614794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.614997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.615029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.615139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.615172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.615341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.615411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.694 qpair failed and we were unable to recover it. 00:27:14.694 [2024-11-17 14:37:03.615517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.694 [2024-11-17 14:37:03.615548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 
00:27:14.695 [2024-11-17 14:37:03.615663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.615695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.615883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.615915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.616030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.616061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.616169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.616200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.616300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.616332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.616520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.616552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.616666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.616697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.616810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.616842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.616948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.616979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.617163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.617195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 
00:27:14.695 [2024-11-17 14:37:03.617306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.617338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.617476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.617508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.617689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.617721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.617950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.617982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.618106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.618138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.618319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.618363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.618576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.618609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.618714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.618745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.618893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.618925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.619057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.619088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 
00:27:14.695 [2024-11-17 14:37:03.619263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.619294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.619422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.619459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.619651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.619682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.619783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.619820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.619960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.619992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.620110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.620141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.620312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.620345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.620465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.620497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.620667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.620699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.620872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.620903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 
00:27:14.695 [2024-11-17 14:37:03.621040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.621072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.621193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.621225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.621367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.621401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.621573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.621605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.621787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.621819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.622031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.622064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.622188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.622220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.695 qpair failed and we were unable to recover it. 00:27:14.695 [2024-11-17 14:37:03.622480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.695 [2024-11-17 14:37:03.622513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 00:27:14.696 [2024-11-17 14:37:03.622666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.622698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 00:27:14.696 [2024-11-17 14:37:03.622867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.622898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 
00:27:14.696 [2024-11-17 14:37:03.623068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.623100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 00:27:14.696 [2024-11-17 14:37:03.623272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.623303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 00:27:14.696 [2024-11-17 14:37:03.623417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.623450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 00:27:14.696 [2024-11-17 14:37:03.623629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.623661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 00:27:14.696 [2024-11-17 14:37:03.623784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.623816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 00:27:14.696 [2024-11-17 14:37:03.623949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.623981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 00:27:14.696 [2024-11-17 14:37:03.624178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.624210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 00:27:14.696 [2024-11-17 14:37:03.624344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.624384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 00:27:14.696 [2024-11-17 14:37:03.624595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.624627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 00:27:14.696 [2024-11-17 14:37:03.624820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.696 [2024-11-17 14:37:03.624852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.696 qpair failed and we were unable to recover it. 
00:27:14.696 [2024-11-17 14:37:03.625044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.696 [2024-11-17 14:37:03.625077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:14.696 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7f5198000b90 from 14:37:03.625 through 14:37:03.648 ...]
00:27:14.699 [2024-11-17 14:37:03.648302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.699 [2024-11-17 14:37:03.648396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:14.699 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for tqpair=0x1fb6ba0 from 14:37:03.648 through 14:37:03.666 ...]
00:27:14.702 [2024-11-17 14:37:03.666642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.666672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.666846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.666878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.666996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.667027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.667128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.667159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.667274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.667306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.667446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.667480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.667598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.667628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.667872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.667905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.668009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.668040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.668149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.668180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 
00:27:14.702 [2024-11-17 14:37:03.668310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.668342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.668473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.668505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.668683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.668711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.668899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.668930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.669106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.669138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.669261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.669294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.669441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.669473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.669607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.669638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.669883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.669915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.670097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.670141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 
00:27:14.702 [2024-11-17 14:37:03.670262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.670290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.670410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.670439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.670627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.670656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.670752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.670780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.670913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.670941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.671117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.671145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.671252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.671280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.671458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.671492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.671615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.671646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.671755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.671787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 
00:27:14.702 [2024-11-17 14:37:03.671906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.671938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.672123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.702 [2024-11-17 14:37:03.672154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.702 qpair failed and we were unable to recover it. 00:27:14.702 [2024-11-17 14:37:03.672376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.672449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.672628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.672695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.672894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.672930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.673044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.673077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.673261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.673293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.673485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.673518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.673708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.673740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.673859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.673891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 
00:27:14.703 [2024-11-17 14:37:03.673996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.674027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.674138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.674171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.674365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.674399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.674522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.674554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.674724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.674757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.674875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.674916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.675493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.675538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.675661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.675692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.675954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.675987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.676227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.676260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 
00:27:14.703 [2024-11-17 14:37:03.676407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.676442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.676676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.676708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.676894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.676928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.677137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.677168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.677341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.677382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.677503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.677534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.677647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.677678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.677794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.677826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.678021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.678052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.678241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.678274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 
00:27:14.703 [2024-11-17 14:37:03.678493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.678527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.678647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.678678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.678816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.678847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.679037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.679070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.679244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.679277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.679401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.679434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.679539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.679570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.679755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.679787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.679940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.679973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 00:27:14.703 [2024-11-17 14:37:03.680095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.703 [2024-11-17 14:37:03.680126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.703 qpair failed and we were unable to recover it. 
00:27:14.703 [2024-11-17 14:37:03.680293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.680326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.680571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.680603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.680828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.680900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.681027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.681064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.681184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.681217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.681341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.681385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.681493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.681526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.681643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.681676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.681786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.681818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.681939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.681971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 
00:27:14.704 [2024-11-17 14:37:03.682091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.682123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.682305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.682338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.682543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.682575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.682758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.682790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.682927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.682960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.683061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.683102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.683339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.683384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.683517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.683549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.683723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.683755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.683875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.683907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 
00:27:14.704 [2024-11-17 14:37:03.684118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.684150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.684276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.684308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.684436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.684470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.684661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.684692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.684883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.684915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.685027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.685059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.685227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.685259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.685374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.685408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.685530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.685562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.685695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.685727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 
00:27:14.704 [2024-11-17 14:37:03.685925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.685957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.686067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.686099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.686218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.686250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.686425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.686457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.686574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.686606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.686786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.686817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.686991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.687023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.687155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-17 14:37:03.687186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-17 14:37:03.687370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.687403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.687519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.687550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 
00:27:14.705 [2024-11-17 14:37:03.687735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.687767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.687870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.687901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.688072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.688144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.688328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.688413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.688565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.688601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.688727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.688759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.688927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.688958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.689071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.689101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.689215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.689247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.689430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.689463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 
00:27:14.705 [2024-11-17 14:37:03.689572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.689605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.689828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.689861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.690082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.690115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.690429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.690464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.690654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.690686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.690856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.690888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.691001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.691031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.691301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.691333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.691483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.691516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-17 14:37:03.691733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-17 14:37:03.691766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 
00:27:14.705 [2024-11-17 14:37:03.691948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.705 [2024-11-17 14:37:03.691981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.705 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 200 more times between 14:37:03.692 and 14:37:03.743; duplicate records elided ...]
00:27:14.715 [2024-11-17 14:37:03.743341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-17 14:37:03.743381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-17 14:37:03.743569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-17 14:37:03.743603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-17 14:37:03.743786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-17 14:37:03.743819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-17 14:37:03.744035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-17 14:37:03.744068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-17 14:37:03.744266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-17 14:37:03.744299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-17 14:37:03.744439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-17 14:37:03.744472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-17 14:37:03.744601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-17 14:37:03.744634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-17 14:37:03.744887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-17 14:37:03.744921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-17 14:37:03.745049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-17 14:37:03.745099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-17 14:37:03.745309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-17 14:37:03.745343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 
00:27:14.716 [2024-11-17 14:37:03.745541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-17 14:37:03.745573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-17 14:37:03.745748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-17 14:37:03.745782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-17 14:37:03.745887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-17 14:37:03.745925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-17 14:37:03.746052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-17 14:37:03.746084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-17 14:37:03.746349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-17 14:37:03.746393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-17 14:37:03.746590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-17 14:37:03.746623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-17 14:37:03.746758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-17 14:37:03.746790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.746995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.747027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.747268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.747301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.747524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.747558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 
00:27:14.718 [2024-11-17 14:37:03.747758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.747790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.747968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.748002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.748244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.748275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.748398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.748433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.748553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.748586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.748852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.748885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.749080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.749112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.749296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.749329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.749515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-17 14:37:03.749549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-17 14:37:03.749763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.719 [2024-11-17 14:37:03.749796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.719 qpair failed and we were unable to recover it. 
00:27:14.719 [2024-11-17 14:37:03.750040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.719 [2024-11-17 14:37:03.750073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.719 qpair failed and we were unable to recover it. 00:27:14.719 [2024-11-17 14:37:03.750338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.719 [2024-11-17 14:37:03.750378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.719 qpair failed and we were unable to recover it. 00:27:14.719 [2024-11-17 14:37:03.750620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.719 [2024-11-17 14:37:03.750653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.719 qpair failed and we were unable to recover it. 00:27:14.719 [2024-11-17 14:37:03.750915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.719 [2024-11-17 14:37:03.750947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.719 qpair failed and we were unable to recover it. 00:27:14.719 [2024-11-17 14:37:03.751142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.719 [2024-11-17 14:37:03.751175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.719 qpair failed and we were unable to recover it. 00:27:14.719 [2024-11-17 14:37:03.751370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.719 [2024-11-17 14:37:03.751405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.719 qpair failed and we were unable to recover it. 00:27:14.719 [2024-11-17 14:37:03.751579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.719 [2024-11-17 14:37:03.751612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.719 qpair failed and we were unable to recover it. 00:27:14.719 [2024-11-17 14:37:03.751856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.719 [2024-11-17 14:37:03.751888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.719 qpair failed and we were unable to recover it. 00:27:14.719 [2024-11-17 14:37:03.752020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.719 [2024-11-17 14:37:03.752053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.719 qpair failed and we were unable to recover it. 00:27:14.719 [2024-11-17 14:37:03.752343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.719 [2024-11-17 14:37:03.752397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.719 qpair failed and we were unable to recover it. 
00:27:14.719 [2024-11-17 14:37:03.752687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.720 [2024-11-17 14:37:03.752720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.720 qpair failed and we were unable to recover it. 00:27:14.720 [2024-11-17 14:37:03.752916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.720 [2024-11-17 14:37:03.752950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.720 qpair failed and we were unable to recover it. 00:27:14.720 [2024-11-17 14:37:03.753159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.720 [2024-11-17 14:37:03.753192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.720 qpair failed and we were unable to recover it. 00:27:14.720 [2024-11-17 14:37:03.753397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.720 [2024-11-17 14:37:03.753431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.720 qpair failed and we were unable to recover it. 00:27:14.720 [2024-11-17 14:37:03.753622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.720 [2024-11-17 14:37:03.753655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.720 qpair failed and we were unable to recover it. 00:27:14.720 [2024-11-17 14:37:03.753840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.720 [2024-11-17 14:37:03.753872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.720 qpair failed and we were unable to recover it. 00:27:14.720 [2024-11-17 14:37:03.754050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.720 [2024-11-17 14:37:03.754082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.720 qpair failed and we were unable to recover it. 00:27:14.720 [2024-11-17 14:37:03.754325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.720 [2024-11-17 14:37:03.754368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.720 qpair failed and we were unable to recover it. 00:27:14.720 [2024-11-17 14:37:03.754488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.720 [2024-11-17 14:37:03.754522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.720 qpair failed and we were unable to recover it. 00:27:14.720 [2024-11-17 14:37:03.754719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.720 [2024-11-17 14:37:03.754751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.720 qpair failed and we were unable to recover it. 
00:27:14.720 [2024-11-17 14:37:03.754879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.720 [2024-11-17 14:37:03.754912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.720 qpair failed and we were unable to recover it. 00:27:14.721 [2024-11-17 14:37:03.755154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.721 [2024-11-17 14:37:03.755188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.721 qpair failed and we were unable to recover it. 00:27:14.721 [2024-11-17 14:37:03.755471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.721 [2024-11-17 14:37:03.755510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.721 qpair failed and we were unable to recover it. 00:27:14.721 [2024-11-17 14:37:03.755719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.721 [2024-11-17 14:37:03.755752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.721 qpair failed and we were unable to recover it. 00:27:14.721 [2024-11-17 14:37:03.755947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.721 [2024-11-17 14:37:03.755980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.721 qpair failed and we were unable to recover it. 00:27:14.721 [2024-11-17 14:37:03.756247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.721 [2024-11-17 14:37:03.756281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.721 qpair failed and we were unable to recover it. 00:27:14.721 [2024-11-17 14:37:03.756565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.721 [2024-11-17 14:37:03.756600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.721 qpair failed and we were unable to recover it. 00:27:14.721 [2024-11-17 14:37:03.756869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.721 [2024-11-17 14:37:03.756903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.721 qpair failed and we were unable to recover it. 00:27:14.722 [2024-11-17 14:37:03.757026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.722 [2024-11-17 14:37:03.757058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.722 qpair failed and we were unable to recover it. 00:27:14.722 [2024-11-17 14:37:03.757249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.722 [2024-11-17 14:37:03.757282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.722 qpair failed and we were unable to recover it. 
00:27:14.722 [2024-11-17 14:37:03.757414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.722 [2024-11-17 14:37:03.757448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.722 qpair failed and we were unable to recover it. 00:27:14.722 [2024-11-17 14:37:03.757564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.722 [2024-11-17 14:37:03.757595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.722 qpair failed and we were unable to recover it. 00:27:14.722 [2024-11-17 14:37:03.757834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.722 [2024-11-17 14:37:03.757867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.722 qpair failed and we were unable to recover it. 00:27:14.722 [2024-11-17 14:37:03.758064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.722 [2024-11-17 14:37:03.758097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.722 qpair failed and we were unable to recover it. 00:27:14.722 [2024-11-17 14:37:03.758295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.722 [2024-11-17 14:37:03.758327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.722 qpair failed and we were unable to recover it. 00:27:14.722 [2024-11-17 14:37:03.758566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.722 [2024-11-17 14:37:03.758600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.722 qpair failed and we were unable to recover it. 00:27:14.722 [2024-11-17 14:37:03.758812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.722 [2024-11-17 14:37:03.758846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.722 qpair failed and we were unable to recover it. 00:27:14.722 [2024-11-17 14:37:03.759049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.722 [2024-11-17 14:37:03.759082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-17 14:37:03.759284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-17 14:37:03.759316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-17 14:37:03.759450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-17 14:37:03.759484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 
00:27:14.723 [2024-11-17 14:37:03.759594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-17 14:37:03.759628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-17 14:37:03.759839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-17 14:37:03.759872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-17 14:37:03.760140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-17 14:37:03.760172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-17 14:37:03.760321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-17 14:37:03.760361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-17 14:37:03.760643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-17 14:37:03.760676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-17 14:37:03.760847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-17 14:37:03.760880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-17 14:37:03.760983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-17 14:37:03.761016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-17 14:37:03.761267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-17 14:37:03.761299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-17 14:37:03.761505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-17 14:37:03.761539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-17 14:37:03.761791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-17 14:37:03.761825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 
00:27:14.724 [2024-11-17 14:37:03.762024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-17 14:37:03.762058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-17 14:37:03.762272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-17 14:37:03.762305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-17 14:37:03.762424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-17 14:37:03.762458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-17 14:37:03.762654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-17 14:37:03.762687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-17 14:37:03.762887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-17 14:37:03.762919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-17 14:37:03.763226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-17 14:37:03.763260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-17 14:37:03.763402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-17 14:37:03.763438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-17 14:37:03.763687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-17 14:37:03.763721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-17 14:37:03.763912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-17 14:37:03.763944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-17 14:37:03.764210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.764244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 
00:27:14.725 [2024-11-17 14:37:03.764526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.764564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-17 14:37:03.764759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.764791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-17 14:37:03.764978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.765017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-17 14:37:03.765233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.765266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-17 14:37:03.765452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.765486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-17 14:37:03.765752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.765784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-17 14:37:03.765907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.765940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-17 14:37:03.766152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.766185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-17 14:37:03.766458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.766492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-17 14:37:03.766699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.766732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 
00:27:14.725 [2024-11-17 14:37:03.767022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.767056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-17 14:37:03.767182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.767216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-17 14:37:03.767421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-17 14:37:03.767456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-17 14:37:03.767634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-17 14:37:03.767667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-17 14:37:03.767885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-17 14:37:03.767918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-17 14:37:03.768056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-17 14:37:03.768088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-17 14:37:03.768290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-17 14:37:03.768323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-17 14:37:03.768534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-17 14:37:03.768568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-17 14:37:03.768711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-17 14:37:03.768744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-17 14:37:03.768996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-17 14:37:03.769030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 
00:27:14.726 [2024-11-17 14:37:03.769274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-17 14:37:03.769307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-17 14:37:03.769598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-17 14:37:03.769633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-17 14:37:03.769743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-17 14:37:03.769775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-17 14:37:03.769980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-17 14:37:03.770015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-17 14:37:03.770191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-17 14:37:03.770224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-17 14:37:03.770422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-17 14:37:03.770456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-17 14:37:03.770696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-17 14:37:03.770729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-17 14:37:03.770875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-17 14:37:03.770908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-17 14:37:03.771156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-17 14:37:03.771189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-17 14:37:03.771391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-17 14:37:03.771427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 
00:27:14.727 [2024-11-17 14:37:03.771613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-17 14:37:03.771646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-17 14:37:03.771862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-17 14:37:03.771896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-17 14:37:03.772077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-17 14:37:03.772110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-17 14:37:03.772378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-17 14:37:03.772412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-17 14:37:03.772590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-17 14:37:03.772624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-17 14:37:03.772891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.728 [2024-11-17 14:37:03.772924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.728 qpair failed and we were unable to recover it. 00:27:14.728 [2024-11-17 14:37:03.773169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.728 [2024-11-17 14:37:03.773203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.728 qpair failed and we were unable to recover it. 00:27:14.728 [2024-11-17 14:37:03.773383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.728 [2024-11-17 14:37:03.773418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.728 qpair failed and we were unable to recover it. 00:27:14.728 [2024-11-17 14:37:03.773711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.728 [2024-11-17 14:37:03.773743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.728 qpair failed and we were unable to recover it. 00:27:14.728 [2024-11-17 14:37:03.773923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.728 [2024-11-17 14:37:03.773955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:14.728 qpair failed and we were unable to recover it. 
00:27:14.728 [2024-11-17 14:37:03.774078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.728 [2024-11-17 14:37:03.774111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:14.728 qpair failed and we were unable to recover it.
[~150 near-identical attempts elided: the same posix.c:1054 connect() failure (errno = 111) and nvme_tcp.c:2288 sock connection error repeat for tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 between 14:37:03.774 and 14:37:03.814, with only the timestamps changing; every attempt ends with "qpair failed and we were unable to recover it."]
00:27:14.741 [2024-11-17 14:37:03.814056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.741 [2024-11-17 14:37:03.814136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.741 qpair failed and we were unable to recover it.
[~60 further attempts elided: the identical failure pattern continues for the new tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 from 14:37:03.814 through 14:37:03.828, each attempt again ending with "qpair failed and we were unable to recover it."]
00:27:14.745 [2024-11-17 14:37:03.828601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.745 [2024-11-17 14:37:03.828634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.745 qpair failed and we were unable to recover it. 00:27:14.746 [2024-11-17 14:37:03.828836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-11-17 14:37:03.828871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.746 qpair failed and we were unable to recover it. 00:27:14.746 [2024-11-17 14:37:03.829145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-11-17 14:37:03.829179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.746 qpair failed and we were unable to recover it. 00:27:14.746 [2024-11-17 14:37:03.829382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-11-17 14:37:03.829416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.746 qpair failed and we were unable to recover it. 00:27:14.746 [2024-11-17 14:37:03.829601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-11-17 14:37:03.829636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.746 qpair failed and we were unable to recover it. 00:27:14.746 [2024-11-17 14:37:03.829888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-11-17 14:37:03.829924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.746 qpair failed and we were unable to recover it. 00:27:14.746 [2024-11-17 14:37:03.830227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-11-17 14:37:03.830262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.746 qpair failed and we were unable to recover it. 00:27:14.746 [2024-11-17 14:37:03.830452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-11-17 14:37:03.830488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.746 qpair failed and we were unable to recover it. 00:27:14.746 [2024-11-17 14:37:03.830684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.746 [2024-11-17 14:37:03.830719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.746 qpair failed and we were unable to recover it. 00:27:14.746 [2024-11-17 14:37:03.830936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-11-17 14:37:03.830970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.747 qpair failed and we were unable to recover it. 
00:27:14.747 [2024-11-17 14:37:03.831182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-11-17 14:37:03.831216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.747 qpair failed and we were unable to recover it. 00:27:14.747 [2024-11-17 14:37:03.831342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-11-17 14:37:03.831393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.747 qpair failed and we were unable to recover it. 00:27:14.747 [2024-11-17 14:37:03.831600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-11-17 14:37:03.831633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.747 qpair failed and we were unable to recover it. 00:27:14.747 [2024-11-17 14:37:03.831907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-11-17 14:37:03.831942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.747 qpair failed and we were unable to recover it. 00:27:14.747 [2024-11-17 14:37:03.832078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-11-17 14:37:03.832113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.747 qpair failed and we were unable to recover it. 00:27:14.747 [2024-11-17 14:37:03.832371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-11-17 14:37:03.832407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.747 qpair failed and we were unable to recover it. 00:27:14.747 [2024-11-17 14:37:03.832620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-11-17 14:37:03.832654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.747 qpair failed and we were unable to recover it. 00:27:14.747 [2024-11-17 14:37:03.832833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.747 [2024-11-17 14:37:03.832868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.747 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.833118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.833153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.833380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.833421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 
00:27:14.748 [2024-11-17 14:37:03.833641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.833677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.833954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.833989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.834172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.834205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.834327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.834371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.834575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.834610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.834908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.834943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.835127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.835160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.835432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.835471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.835758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.835794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.835906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.835941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 
00:27:14.748 [2024-11-17 14:37:03.836199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.836233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.836495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.836530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.748 [2024-11-17 14:37:03.836724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.748 [2024-11-17 14:37:03.836757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.748 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.836881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.836916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.837170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.837205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.837393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.837430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.837635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.837671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.837782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.837819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.838099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.838133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.838326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.838373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 
00:27:14.749 [2024-11-17 14:37:03.838561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.838594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.838891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.838925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.839198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.839234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.839494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.839529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.839655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.839690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.839876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.839910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.840093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.840128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.840408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.840443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.840667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.840700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 00:27:14.749 [2024-11-17 14:37:03.840887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.840920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.749 qpair failed and we were unable to recover it. 
00:27:14.749 [2024-11-17 14:37:03.841184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.749 [2024-11-17 14:37:03.841219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.841497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.841532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.841664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.841698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.841809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.841843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.842095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.842129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.842310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.842343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.842556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.842592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.842773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.842806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.843083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.843116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.843336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.843388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 
00:27:14.750 [2024-11-17 14:37:03.843592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.843629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.843889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.843923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.844047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.844081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.844204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.844240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.844541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.844578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.844862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.844895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.845092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.845127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.750 [2024-11-17 14:37:03.845403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.750 [2024-11-17 14:37:03.845437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.750 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.845694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.845728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.845974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.846008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 
00:27:14.751 [2024-11-17 14:37:03.846151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.846184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.846374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.846409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.846693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.846726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.847000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.847033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.847247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.847283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.847557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.847593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.847723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.847756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.848033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.848066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.848260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.848293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.848508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.848542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 
00:27:14.751 [2024-11-17 14:37:03.848849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.848882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.849125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.849160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.849343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.849386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.751 qpair failed and we were unable to recover it. 00:27:14.751 [2024-11-17 14:37:03.849518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.751 [2024-11-17 14:37:03.849550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.849758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.849794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.849989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.850024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.850327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.850371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.850651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.850687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.850895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.850929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.851219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.851252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 
00:27:14.752 [2024-11-17 14:37:03.851401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.851437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.851664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.851699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.851904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.851938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.852122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.852154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.852383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.852418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.852692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.852726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.852839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.852874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.853093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.853129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.853421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.853457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.853750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.853789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 
00:27:14.752 [2024-11-17 14:37:03.853985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.752 [2024-11-17 14:37:03.854020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.752 qpair failed and we were unable to recover it. 00:27:14.752 [2024-11-17 14:37:03.854208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.854242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.854459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.854494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.854681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.854715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.854898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.854933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.855146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.855180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.855433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.855469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.855583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.855618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.855812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.855847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.856122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.856156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 
00:27:14.753 [2024-11-17 14:37:03.856371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.856407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.856605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.856642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.856945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.856980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.857184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.857217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.857402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.857439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.857618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.857652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.857914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.857952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.858150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.858183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.858375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.858410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.753 qpair failed and we were unable to recover it. 00:27:14.753 [2024-11-17 14:37:03.858594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-17 14:37:03.858628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.754 qpair failed and we were unable to recover it. 
00:27:14.754 [2024-11-17 14:37:03.858815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-17 14:37:03.858851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.754 qpair failed and we were unable to recover it. 00:27:14.754 [2024-11-17 14:37:03.859101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-17 14:37:03.859134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.754 qpair failed and we were unable to recover it. 00:27:14.754 [2024-11-17 14:37:03.859331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-17 14:37:03.859404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.754 qpair failed and we were unable to recover it. 00:27:14.754 [2024-11-17 14:37:03.859535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-17 14:37:03.859571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.754 qpair failed and we were unable to recover it. 00:27:14.754 [2024-11-17 14:37:03.859695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-17 14:37:03.859728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.754 qpair failed and we were unable to recover it. 00:27:14.754 [2024-11-17 14:37:03.860030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-17 14:37:03.860063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.754 qpair failed and we were unable to recover it. 00:27:14.754 [2024-11-17 14:37:03.860368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-17 14:37:03.860405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.754 qpair failed and we were unable to recover it. 00:27:14.754 [2024-11-17 14:37:03.860608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-17 14:37:03.860641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.754 qpair failed and we were unable to recover it. 00:27:14.754 [2024-11-17 14:37:03.860928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-17 14:37:03.860963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.754 qpair failed and we were unable to recover it. 00:27:14.754 [2024-11-17 14:37:03.861176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-17 14:37:03.861210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:14.754 qpair failed and we were unable to recover it. 
00:27:14.754 [2024-11-17 14:37:03.861434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.754 [2024-11-17 14:37:03.861469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:14.754 qpair failed and we were unable to recover it.
[... the same three-line sequence -- connect() failed, errno = 111 / sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeated continuously from 14:37:03.861 to 14:37:03.915 ...]
00:27:15.055 [2024-11-17 14:37:03.914996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.055 [2024-11-17 14:37:03.915034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:15.055 qpair failed and we were unable to recover it.
00:27:15.055 [2024-11-17 14:37:03.915254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.915289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.915563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.915597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.915822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.915855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.916107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.916140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.916338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.916383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.916566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.916599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.916783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.916816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.917070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.917103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.917299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.917333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.917612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.917647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 
00:27:15.055 [2024-11-17 14:37:03.917927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.917959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.918242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.918276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.918411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.918448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.918660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.918693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.919005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.919038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.919225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.919260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.919454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.919489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.919741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.919775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.920076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.920110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.920374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.920409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 
00:27:15.055 [2024-11-17 14:37:03.920680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.920714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.920912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.920945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.921144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.921177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.921436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.921471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.921651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.921683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.921952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.921986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.922200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.922233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.922376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.922411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.922662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.922695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 00:27:15.055 [2024-11-17 14:37:03.922991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.055 [2024-11-17 14:37:03.923024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.055 qpair failed and we were unable to recover it. 
00:27:15.056 [2024-11-17 14:37:03.923208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.923241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.923504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.923540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.923735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.923768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.924035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.924068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.924334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.924378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.924508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.924541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.924744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.924777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.924972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.925004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.925199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.925231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.925508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.925547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 
00:27:15.056 [2024-11-17 14:37:03.925854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.925887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.926164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.926197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.926473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.926530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.926799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.926830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.927053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.927086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.927276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.927308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.927573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.927609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.927909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.927943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.928208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.928241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.928483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.928517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 
00:27:15.056 [2024-11-17 14:37:03.928713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.928745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.929023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.929056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.929250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.929283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.929504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.929539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.929794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.929828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.930103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.930135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.930363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.930398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.930652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.930687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.930825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.930858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.931138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.931171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 
00:27:15.056 [2024-11-17 14:37:03.931453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.931488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.931765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.931798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.932029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.932062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.932283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.932316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.932541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.932576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.932732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.932766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.933131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.933212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.056 [2024-11-17 14:37:03.933466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.056 [2024-11-17 14:37:03.933507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.056 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.933772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.933807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.934086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.934119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 
00:27:15.057 [2024-11-17 14:37:03.934320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.934365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.934626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.934660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.934838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.934871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.935121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.935155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.935348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.935391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.935664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.935696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.935978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.936011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.936244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.936278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.936479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.936514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.936715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.936757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 
00:27:15.057 [2024-11-17 14:37:03.937032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.937067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.937207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.937240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.937434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.937468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.937664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.937697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.937820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.937854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.938090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.938124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.938340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.938388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.938600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.938634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.938888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.938921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.939130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.939164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 
00:27:15.057 [2024-11-17 14:37:03.939427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.939463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.939604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.939638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.939825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.939858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.940143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.940177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.940304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.940338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.940546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.940580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.940778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.940812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.940990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.941023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.941254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.057 [2024-11-17 14:37:03.941288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.057 qpair failed and we were unable to recover it. 00:27:15.057 [2024-11-17 14:37:03.941492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.941529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 
00:27:15.058 [2024-11-17 14:37:03.941729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.941763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.941943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.941975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.942194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.942228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.942477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.942511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.942771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.942805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.943053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.943089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.943393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.943428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.943567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.943600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.943865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.943900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.944196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.944230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 
00:27:15.058 [2024-11-17 14:37:03.944365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.944401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.944529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.944562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.944766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.944799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.945113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.945147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.945422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.945458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.945650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.945684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.945965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.945999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.946196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.946230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.946412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.946448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.946699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.946732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 
00:27:15.058 [2024-11-17 14:37:03.946993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.947028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.947145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.947180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.947454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.947489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.947750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.947783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.948059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.948095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.948290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.948323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.948610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.948644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.948921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.948957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.949079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.949113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 00:27:15.058 [2024-11-17 14:37:03.949235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.058 [2024-11-17 14:37:03.949271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.058 qpair failed and we were unable to recover it. 
00:27:15.058 [2024-11-17 14:37:03.949413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.058 [2024-11-17 14:37:03.949451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.058 qpair failed and we were unable to recover it.
[... the same three-record failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt timestamped 14:37:03.949674 through 14:37:03.984543 ...]
00:27:15.062 [2024-11-17 14:37:03.984747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.062 [2024-11-17 14:37:03.984781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.062 qpair failed and we were unable to recover it.
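For context on the loop above: errno 111 on Linux is ECONNREFUSED, meaning the TCP connection attempt reached 10.0.0.2 but nothing was accepting on port 4420 (the standard NVMe/TCP port). That is the expected failure mode while the target application is down, and the trace just below shows the harness has in fact SIGKILLed it. A minimal standalone illustration of the same failure follows; it is a demo, not SPDK code, and the loopback address is only an assumed stand-in for any endpoint with no listener:

/* econnrefused_demo.c -- illustrative only, not SPDK code.
 * Connect to a port with no listener and print the resulting errno;
 * on Linux this prints errno = 111 (Connection refused), matching the
 * posix_sock_create errors in this log. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumed: nothing listens here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Compiled with cc econnrefused_demo.c and run on a host where port 4420 is closed, this should print connect() failed, errno = 111 (Connection refused).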
00:27:15.062 [2024-11-17 14:37:03.984910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.984944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 00:27:15.062 [2024-11-17 14:37:03.985202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.985235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 00:27:15.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1622454 Killed "${NVMF_APP[@]}" "$@" 00:27:15.062 [2024-11-17 14:37:03.985427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.985464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 00:27:15.062 [2024-11-17 14:37:03.985743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.985778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 00:27:15.062 [2024-11-17 14:37:03.986031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.986067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 00:27:15.062 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:15.062 [2024-11-17 14:37:03.986382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.986417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 00:27:15.062 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:15.062 [2024-11-17 14:37:03.986576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.986613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 00:27:15.062 [2024-11-17 14:37:03.986748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.986783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 
00:27:15.062 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:15.062 [2024-11-17 14:37:03.987057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.987094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 00:27:15.062 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:15.062 [2024-11-17 14:37:03.987208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.987245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 00:27:15.062 [2024-11-17 14:37:03.987376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.987412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 00:27:15.062 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.062 [2024-11-17 14:37:03.987600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.062 [2024-11-17 14:37:03.987636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.062 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.987829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.987865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.988052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.988088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.988374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.988410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.988667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.988701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.989000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.989034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 
00:27:15.063 [2024-11-17 14:37:03.989377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.989412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.989591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.989624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.989904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.989940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.990220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.990261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.990478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.990515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.990707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.990740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.991016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.991052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.991313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.991346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.991559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.991593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.991801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.991834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 
00:27:15.063 [2024-11-17 14:37:03.992029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.992066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.992275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.992311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.992531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.992567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.992723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.992758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.992964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.992998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.993208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.993242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.993511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.993548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.993788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.993823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.994111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.994145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 
00:27:15.063 [2024-11-17 14:37:03.994426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1623171 00:27:15.063 [2024-11-17 14:37:03.994462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.994684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.994719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1623171 00:27:15.063 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:15.063 [2024-11-17 14:37:03.994919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.994953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1623171 ']' 00:27:15.063 [2024-11-17 14:37:03.995210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.995245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.063 [2024-11-17 14:37:03.995518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.995553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.995737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.995772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.063 qpair failed and we were unable to recover it. 00:27:15.063 [2024-11-17 14:37:03.995973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.063 [2024-11-17 14:37:03.996007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.063 qpair failed and we were unable to recover it. 
00:27:15.063 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:15.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:15.063 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:15.063 14:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
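The interleaved xtrace lines above show the test-side sequence: nvmf/common.sh launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, records its pid (1623171), and waitforlisten then polls until the target's RPC socket /var/tmp/spdk.sock is up (max_retries=100), all while the host-side connect() retries keep failing in the background. A condensed sketch of that startup, reusing the exact command and paths from the trace; the polling loop is only a simplified stand-in for the real waitforlisten helper in autotest_common.sh, and the flag meanings follow SPDK's common application options:

    #!/usr/bin/env bash
    # Sketch of the target startup traced above; paths and flags are copied
    # from the log, and the loop below merely approximates waitforlisten.
    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC_SOCK=/var/tmp/spdk.sock

    # -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask,
    # -m 0xF0: core mask (cores 4-7), run inside the test's namespace.
    ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    # Poll for the RPC UNIX socket, as waitforlisten does (max_retries=100).
    for _ in $(seq 1 100); do
        if [ -S "$RPC_SOCK" ]; then
            echo "nvmf_tgt (pid ${nvmfpid}) is listening on ${RPC_SOCK}"
            exit 0
        fi
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done
    echo "timed out waiting for ${RPC_SOCK}" >&2
    exit 1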
00:27:15.066 [2024-11-17 14:37:04.019689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.066 [2024-11-17 14:37:04.019770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.066 qpair failed and we were unable to recover it.
00:27:15.067 [2024-11-17 14:37:04.031594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.067 [2024-11-17 14:37:04.031674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:15.067 qpair failed and we were unable to recover it.
00:27:15.067 [2024-11-17 14:37:04.031747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc4af0 (9): Bad file descriptor
00:27:15.067 [2024-11-17 14:37:04.032129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.067 [2024-11-17 14:37:04.032206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:15.067 qpair failed and we were unable to recover it.
00:27:15.067 [2024-11-17 14:37:04.032487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.067 [2024-11-17 14:37:04.032528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.067 qpair failed and we were unable to recover it. 00:27:15.067 [2024-11-17 14:37:04.032816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.067 [2024-11-17 14:37:04.032851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.067 qpair failed and we were unable to recover it. 00:27:15.067 [2024-11-17 14:37:04.033059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.067 [2024-11-17 14:37:04.033093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.067 qpair failed and we were unable to recover it. 00:27:15.067 [2024-11-17 14:37:04.033288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.067 [2024-11-17 14:37:04.033323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.067 qpair failed and we were unable to recover it. 00:27:15.067 [2024-11-17 14:37:04.033475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.067 [2024-11-17 14:37:04.033513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.067 qpair failed and we were unable to recover it. 00:27:15.067 [2024-11-17 14:37:04.033799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.067 [2024-11-17 14:37:04.033832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.067 qpair failed and we were unable to recover it. 00:27:15.067 [2024-11-17 14:37:04.034083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.067 [2024-11-17 14:37:04.034115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.067 qpair failed and we were unable to recover it. 00:27:15.067 [2024-11-17 14:37:04.034418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.067 [2024-11-17 14:37:04.034452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.067 qpair failed and we were unable to recover it. 00:27:15.067 [2024-11-17 14:37:04.034660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.067 [2024-11-17 14:37:04.034693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.067 qpair failed and we were unable to recover it. 00:27:15.067 [2024-11-17 14:37:04.034913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.067 [2024-11-17 14:37:04.034946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.067 qpair failed and we were unable to recover it. 
00:27:15.067 [2024-11-17 14:37:04.035249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.067 [2024-11-17 14:37:04.035282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.067 qpair failed and we were unable to recover it.
00:27:15.068 [the same connect() failed (errno = 111) / qpair-failure triad repeats for tqpair=0x7f518c000b90 through 14:37:04.043]
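errno = 111 is ECONNREFUSED on Linux: nothing is accepting TCP connections at 10.0.0.2:4420 yet, so every connect() the initiator issues is refused and the qpair cannot be established. A minimal sketch of how such a failure surfaces, assuming a plain blocking socket; this is illustrative C, not SPDK's actual posix.c code, and try_connect is a made-up helper name:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Illustrative helper (not SPDK code): one blocking TCP connect attempt,
 * reporting errno the same way posix_sock_create logs it above. */
int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return -errno;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) {
        close(fd);
        return -EINVAL;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        int err = errno;    /* 111 == ECONNREFUSED when nothing listens */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                err, strerror(err));
        close(fd);
        return -err;
    }

    close(fd);
    return 0;
}

int main(void)
{
    /* 10.0.0.2:4420 is the NVMe/TCP listener this test is waiting for. */
    return try_connect("10.0.0.2", 4420) == 0 ? 0 : 1;
}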
00:27:15.068 [connect() failed (errno = 111) / qpair-failure errors continue for tqpair=0x7f518c000b90]
00:27:15.068 [2024-11-17 14:37:04.044743] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:27:15.068 [2024-11-17 14:37:04.044789] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:15.068 [connect() failed (errno = 111) / qpair-failure errors continue for tqpair=0x7f518c000b90]
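In the EAL parameters above, -c 0xF0 is a hexadecimal coremask: each set bit selects one CPU core, so 0xF0 pins this nvmf process to cores 4-7. A tiny sketch of how such a mask decodes (illustrative only, not DPDK code):

#include <stdio.h>

/* Decode a DPDK/SPDK -c coremask: each set bit selects one CPU core,
 * so 0xF0 -> cores 4, 5, 6, 7 (value taken from the EAL line above). */
int main(void)
{
    unsigned long mask = 0xF0;
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core)) {
            printf("core %d\n", core);
        }
    }
    return 0;
}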
00:27:15.068 [connect() failed (errno = 111) / qpair-failure errors repeat for tqpair=0x7f518c000b90 through 14:37:04.052]
00:27:15.069 [2024-11-17 14:37:04.052445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.069 [2024-11-17 14:37:04.052494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:15.069 qpair failed and we were unable to recover it.
00:27:15.069 [the same errors repeat for tqpair=0x1fb6ba0 through 14:37:04.059]
00:27:15.070 [2024-11-17 14:37:04.059518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.070 [2024-11-17 14:37:04.059555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.070 qpair failed and we were unable to recover it.
00:27:15.070 [the same errors repeat for tqpair=0x7f518c000b90 through 14:37:04.063]
00:27:15.070 [2024-11-17 14:37:04.063642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.070 [2024-11-17 14:37:04.063680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:15.070 qpair failed and we were unable to recover it.
00:27:15.070 [2024-11-17 14:37:04.064675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.070 [2024-11-17 14:37:04.064707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.070 qpair failed and we were unable to recover it. 00:27:15.070 [2024-11-17 14:37:04.064956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.070 [2024-11-17 14:37:04.064988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.070 qpair failed and we were unable to recover it. 00:27:15.070 [2024-11-17 14:37:04.065238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.070 [2024-11-17 14:37:04.065271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.070 qpair failed and we were unable to recover it. 00:27:15.070 [2024-11-17 14:37:04.065523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.070 [2024-11-17 14:37:04.065556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.070 qpair failed and we were unable to recover it. 00:27:15.070 [2024-11-17 14:37:04.065849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.070 [2024-11-17 14:37:04.065882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.070 qpair failed and we were unable to recover it. 00:27:15.070 [2024-11-17 14:37:04.066150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.070 [2024-11-17 14:37:04.066182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.070 qpair failed and we were unable to recover it. 00:27:15.070 [2024-11-17 14:37:04.066373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.066406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.066676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.066709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.066851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.066883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.067019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.067053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 
00:27:15.071 [2024-11-17 14:37:04.067228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.067260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.067405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.067439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.067620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.067652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.067923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.067955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.068200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.068233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.068414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.068450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.068671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.068705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.068950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.068982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.069245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.069278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.069523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.069557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 
00:27:15.071 [2024-11-17 14:37:04.069852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.069885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.070088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.070121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.070395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.070436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.070713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.070746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.070968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.071002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.071247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.071284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.071477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.071511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.071646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.071680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.071927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.071960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.072180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.072215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 
00:27:15.071 [2024-11-17 14:37:04.072412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.072448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.072666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.072699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.072872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.072905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.073176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.073209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.073442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.073478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.073610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.073644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.073824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.073859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.074050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.074083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.074194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.074224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 00:27:15.071 [2024-11-17 14:37:04.074492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.071 [2024-11-17 14:37:04.074527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.071 qpair failed and we were unable to recover it. 
00:27:15.072 [2024-11-17 14:37:04.074774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.074807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.075099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.075133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.075474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.075510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.075755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.075787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.076097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.076131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.076380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.076416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.076676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.076709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.076834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.076867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.077045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.077078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.077364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.077398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 
00:27:15.072 [2024-11-17 14:37:04.077689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.077722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.077982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.078014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.078203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.078236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.078498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.078532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.078736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.078768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.078896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.078929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.079197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.079228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.079501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.079535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.079819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.079852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.080127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.080159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 
00:27:15.072 [2024-11-17 14:37:04.080374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.080409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.080640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.080674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.080797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.080836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.081105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.081137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.081254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.081287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.081502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.081536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.081730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.081763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.082072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.082105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.082323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.082379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.082520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.082553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 
00:27:15.072 [2024-11-17 14:37:04.082827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.082860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.083040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.083073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.083288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.083320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.083472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.083506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.083797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.083832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.084102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.084135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.084288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.084322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.072 [2024-11-17 14:37:04.084542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.072 [2024-11-17 14:37:04.084581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.072 qpair failed and we were unable to recover it. 00:27:15.073 [2024-11-17 14:37:04.084832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.073 [2024-11-17 14:37:04.084865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.073 qpair failed and we were unable to recover it. 00:27:15.073 [2024-11-17 14:37:04.084991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.073 [2024-11-17 14:37:04.085023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.073 qpair failed and we were unable to recover it. 
00:27:15.073 [2024-11-17 14:37:04.085287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.085318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.085525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.085558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.085798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.085830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.086023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.086056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.086232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.086263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.086454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.086489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.086684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.086717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.086933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.086965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.087155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.087187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.087374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.087409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.087598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.087629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.087846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.087878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.088078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.088111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.088244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.088276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.088517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.088551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.088826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.088859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.089144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.089176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.089429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.089462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.089674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.089707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.089830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.089862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.090149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.090181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.090307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.090338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.090546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.090586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.090878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.090911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.091168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.091200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.091389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.091423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.091595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.091628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.091868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.091900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.092093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.092125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.092372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.092407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.092560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.092592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.092778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.092810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.093121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.093154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.093334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.093373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.093556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.093589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.073 qpair failed and we were unable to recover it.
00:27:15.073 [2024-11-17 14:37:04.093861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.073 [2024-11-17 14:37:04.093894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.094110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.094144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.094347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.094386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.094658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.094691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.094939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.094971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.095143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.095174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.095438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.095472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.095657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.095689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.095861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.095894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.096168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.096200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.096492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.096525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.096791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.096824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.097038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.097070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.097330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.097370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.097565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.097598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.097726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.097758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.098029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.098061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.098249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.098281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.098483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.098516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.098756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.098788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.099098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.099130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.099317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.099349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.099597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.099630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.099764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.099796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.099990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.100021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.100245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.100277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.100385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.100419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.100601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.100638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.100829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.100861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.101070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.101103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.101372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.101406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.101645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.101677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.101798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.101830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.102012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.102044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.102305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.102337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.102553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.102585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.102760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.102792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.102968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.103001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.074 [2024-11-17 14:37:04.103208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.074 [2024-11-17 14:37:04.103240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.074 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.103444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.103476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.103590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.103623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.103826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.103859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.104145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.104176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.104442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.104475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.104650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.104682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.104946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.104978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.105184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.105216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.105397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.105431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.105636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.105668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.105862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.105893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.106202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.106234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.106424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.106455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.106696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.106728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.106918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.106950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.107136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.107167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.107282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.107314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.107459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.107492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.107734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.107768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.108061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.108093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.108272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.108304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.108432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.108465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.108655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.108704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.108994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.109026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.109215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.109247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.109425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.109469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.109686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.109718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.109855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.109887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.110180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.110218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.110389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.110422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.110603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.110635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.110758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.110791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.110989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.111021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.111206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.075 [2024-11-17 14:37:04.111238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.075 qpair failed and we were unable to recover it.
00:27:15.075 [2024-11-17 14:37:04.111504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.111538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.111780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.111812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.111935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.111967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.112089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.112121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.112368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.112401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.112577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.112609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.112853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.112885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.113173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.113205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.113425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.113458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.113652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.113684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.113977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.114009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.114273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.114306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.114601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.114634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.114820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.114852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.115059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.115092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.115296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.115327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.115577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.115610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.115845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.115877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.116060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.116091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.116271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.116303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.116550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.116582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.116784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.116816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.117080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.117112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.117342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.117383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.117620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.117652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.117894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.117926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.118183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.118215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.118388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.118422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.118620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.118652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.118893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.118925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.119213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.119244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.119511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.119543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.119725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.119757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.120022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.120053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.120245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.120282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.120468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.120501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.120781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.076 [2024-11-17 14:37:04.120814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.076 qpair failed and we were unable to recover it.
00:27:15.076 [2024-11-17 14:37:04.120935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.120968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.121203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.121236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.121403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.121436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.121626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.121658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.121845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.121878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.122053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.122085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.122321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.122364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.122600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.122633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.122852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.122883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.123083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.123115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.123310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.123342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.123615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.123648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.123887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.123919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.124158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.124190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.124458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.124491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.124667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.124699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.124902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.124934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.125196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.125228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.125492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.125524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.125713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.125745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.125980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.126012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.126211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.126243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.126434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.126467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.126657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.126689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.126817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.126849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.077 [2024-11-17 14:37:04.127033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.077 [2024-11-17 14:37:04.127065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.077 qpair failed and we were unable to recover it. 00:27:15.077 [2024-11-17 14:37:04.127326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.077 [2024-11-17 14:37:04.127366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.077 qpair failed and we were unable to recover it. 00:27:15.077 [2024-11-17 14:37:04.127573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.077 [2024-11-17 14:37:04.127605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.077 qpair failed and we were unable to recover it. 00:27:15.077 [2024-11-17 14:37:04.127781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.077 [2024-11-17 14:37:04.127813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.077 qpair failed and we were unable to recover it. 00:27:15.077 [2024-11-17 14:37:04.128018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.077 [2024-11-17 14:37:04.128051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.077 qpair failed and we were unable to recover it. 00:27:15.077 [2024-11-17 14:37:04.128246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.077 [2024-11-17 14:37:04.128277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.077 qpair failed and we were unable to recover it. 00:27:15.077 [2024-11-17 14:37:04.128529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.077 [2024-11-17 14:37:04.128563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.077 qpair failed and we were unable to recover it. 00:27:15.077 [2024-11-17 14:37:04.128683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.077 [2024-11-17 14:37:04.128715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.077 qpair failed and we were unable to recover it. 00:27:15.077 [2024-11-17 14:37:04.128739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:15.077 [2024-11-17 14:37:04.128901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.077 [2024-11-17 14:37:04.128933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.077 qpair failed and we were unable to recover it. 
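(Editor's note, not part of the captured log.) errno = 111 on Linux is ECONNREFUSED: the initiator's connect() to 10.0.0.2:4420 reached the host but found no listener on that port, so the NVMe/TCP qpair cannot be established and the driver keeps retrying. A minimal shell sketch of the same probe; the address and port come from the log above, while the use of nc is an assumption for illustration:

    # Probe the NVMe/TCP target port the way the failing connect() does.
    # Exit status stays non-zero while nothing is listening on 10.0.0.2:4420.
    nc -zv -w 1 10.0.0.2 4420 || echo "refused (ECONNREFUSED / errno 111) - target not listening yet"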
00:27:15.077 [... the same failure triplet continues for tqpair=0x7f518c000b90 from 14:37:04.129176 through 14:37:04.129771 ...]
00:27:15.077 [2024-11-17 14:37:04.129918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.077 [2024-11-17 14:37:04.129964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.077 qpair failed and we were unable to recover it.
00:27:15.078 [... the triplet repeats for the new tqpair=0x7f5198000b90 from 14:37:04.130162 through 14:37:04.139047 ...]
00:27:15.078 [... the triplet continues for tqpair=0x7f5198000b90 from 14:37:04.139217 through 14:37:04.139850 ...]
00:27:15.079 [2024-11-17 14:37:04.140011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.079 [2024-11-17 14:37:04.140061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:15.079 qpair failed and we were unable to recover it.
00:27:15.079 [... the triplet repeats for the new tqpair=0x7f5190000b90 from 14:37:04.140361 through 14:37:04.149749 ...]
00:27:15.080 [... the triplet continues for tqpair=0x7f5190000b90 from 14:37:04.149959 through 14:37:04.150466 ...]
00:27:15.080 [2024-11-17 14:37:04.150671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.080 [2024-11-17 14:37:04.150723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:15.080 qpair failed and we were unable to recover it.
00:27:15.080 [... the triplet repeats for the new tqpair=0x1fb6ba0 from 14:37:04.150999 through 14:37:04.160100 ...]
00:27:15.081 [... the triplet continues, cycling between the qpairs: once more for tqpair=0x1fb6ba0 at 14:37:04.160288, for tqpair=0x7f5198000b90 from 14:37:04.160579 through 14:37:04.164544, for tqpair=0x7f5190000b90 from 14:37:04.164762 through 14:37:04.167324, once for tqpair=0x1fb6ba0 at 14:37:04.167513, and again for tqpair=0x7f5198000b90 from 14:37:04.167760 through 14:37:04.170557 ...]
00:27:15.082 [2024-11-17 14:37:04.170796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.170830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.171022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.171056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.171344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.171387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.171514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:15.082 [2024-11-17 14:37:04.171544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:15.082 [2024-11-17 14:37:04.171554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:15.082 [2024-11-17 14:37:04.171561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:15.082 [2024-11-17 14:37:04.171566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:15.082 [2024-11-17 14:37:04.171670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.171701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.171966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.172000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.172125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.172158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.172326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.172366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.172623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.172656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.172914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.172946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.173155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.173188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.173140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:15.082 [2024-11-17 14:37:04.173247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:15.082 [2024-11-17 14:37:04.173324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.173356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:15.082 [2024-11-17 14:37:04.173375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.173366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:15.082 [2024-11-17 14:37:04.173563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.173594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.173762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.173793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.173979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.174011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.174140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.174172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.174457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.174491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.174663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.174696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.174879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.174912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.175100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.175133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.175395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.175428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.175668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.175700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.175882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.175914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.176173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.176206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.176444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.176476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.176648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.176682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.082 [2024-11-17 14:37:04.176860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.082 [2024-11-17 14:37:04.176892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.082 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.177097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.177130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.177305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.177342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.177615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.177647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.177816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.177849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.178030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.178062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.178177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.178208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.178470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.178504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.178794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.178826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.178944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.178976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.179212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.179244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.179551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.179584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.179781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.179814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.179995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.180027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.180264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.180297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.180518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.180552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.180736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.180769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.181031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.181063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.181268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.181301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.181510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.181544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.181785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.181816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.182082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.182114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.182401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.182436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.182580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.182611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.182821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.182853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.183027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.183060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.183272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.183304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.183518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.183552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.183841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.183874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.184138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.184170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.184274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.184307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.184574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.184608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.184890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.184923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.185195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.185227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.185406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.185440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.185559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.185590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.185852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.185886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.186019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.186050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.083 [2024-11-17 14:37:04.186309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.083 [2024-11-17 14:37:04.186342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.083 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.186546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.186581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.186817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.186851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.187120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.187155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.187422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.187463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.187725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.187757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.187866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.187898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.188106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.188139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.188315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.188348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.188541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.188572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.188838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.188871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.189158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.189191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.189405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.189438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.189702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.189736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.189990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.190023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.190193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.190226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.190471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.190505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.190690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.190724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.190968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.191002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.191176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.191209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.191447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.191481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.191668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.191702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.191873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.191906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.192029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.192065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.192190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.192221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.192423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.192458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.192674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.192708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.192830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.192863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.193158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.193192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.193378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.193413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.193676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.193710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.193893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.193925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.194192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.194224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.194513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.194547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.194762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.194794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.194922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.194955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.195205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.195239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.195434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.195471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.084 [2024-11-17 14:37:04.195659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.084 [2024-11-17 14:37:04.195693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.084 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.195877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.195909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.196098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.196131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.196275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.196308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.196552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.196586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.196755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.196789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.197041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.197081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.197291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.197325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.197457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.197491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.197744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.197778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.197970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.198002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.198266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.198299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.198428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.198463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.198578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.198610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.198798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.198832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.199162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.199195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.199390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.199424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.199601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.199633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.199822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.199854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.200113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.200148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.200394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.200428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.200632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.200665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.200870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.200904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.201143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.201176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.201359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.201392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.201569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.201602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.201733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.201765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.201943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.201976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.202229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.202263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.202451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.202484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.202662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.202694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.202932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.202965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.203134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.203167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.203367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.085 [2024-11-17 14:37:04.203404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.085 qpair failed and we were unable to recover it.
00:27:15.085 [2024-11-17 14:37:04.203522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.203555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.203766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.203799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.204033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.204067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.204249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.204282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.204547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.204581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.204869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.204901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.205007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.205040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.205227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.205260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.205498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.205532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.205741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.205774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.205944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.205977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.206169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.206202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.206466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.206506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.206638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.206671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.206916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.206948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.207140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.207173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.207296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.207333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.207492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.207525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.207781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.207813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.208072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.208106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.208237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.208269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.208444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.208478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.208716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.208748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.209017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.209050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.209332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.209377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.209639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.209672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.209933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.209967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.210155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.210189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.210373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.210407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.210595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.210626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.210819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.210852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.211130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.086 [2024-11-17 14:37:04.211162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.086 qpair failed and we were unable to recover it.
00:27:15.086 [2024-11-17 14:37:04.211332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.086 [2024-11-17 14:37:04.211371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.086 qpair failed and we were unable to recover it. 00:27:15.086 [2024-11-17 14:37:04.211634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.086 [2024-11-17 14:37:04.211666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.086 qpair failed and we were unable to recover it. 00:27:15.086 [2024-11-17 14:37:04.211933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.086 [2024-11-17 14:37:04.211966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.086 qpair failed and we were unable to recover it. 00:27:15.086 [2024-11-17 14:37:04.212208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.086 [2024-11-17 14:37:04.212241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.086 qpair failed and we were unable to recover it. 00:27:15.086 [2024-11-17 14:37:04.212487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.086 [2024-11-17 14:37:04.212519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.086 qpair failed and we were unable to recover it. 00:27:15.086 [2024-11-17 14:37:04.212702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.086 [2024-11-17 14:37:04.212735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.086 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.212919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.212951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.213202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.213267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.213573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.213623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.213844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.213878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 
00:27:15.087 [2024-11-17 14:37:04.214140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.214173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.214475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.214510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.214724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.214757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.214891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.214923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.215051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.215084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.215322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.215363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.215549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.215581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.215766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.215798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.215970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.216004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.216213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.216246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 
00:27:15.087 [2024-11-17 14:37:04.216526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.216559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.216684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.216717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.216906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.216940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.217199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.217232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.217413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.217446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.217652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.217685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.217859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.217891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.218067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.218098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.218303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.218336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.218601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.218633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 
00:27:15.087 [2024-11-17 14:37:04.218892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.218925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.219093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.219125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.219302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.219336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.219478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.219512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.219693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.219729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.219991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.220024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.220307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.220340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.220614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.220646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.220820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.220852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.221034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.221066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 
00:27:15.087 [2024-11-17 14:37:04.221253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.221285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.221404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.221438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.221661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.221694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.221892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.087 [2024-11-17 14:37:04.221925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.087 qpair failed and we were unable to recover it. 00:27:15.087 [2024-11-17 14:37:04.222106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.222138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.222317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.222349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.222644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.222677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.222813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.222852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.223116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.223148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.223360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.223395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 
00:27:15.088 [2024-11-17 14:37:04.223565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.223597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.223780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.223813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.224001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.224035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.224244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.224278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.224466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.224500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.224669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.224702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.224837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.224869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.225128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.225160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.225402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.225435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.225646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.225679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 
00:27:15.088 [2024-11-17 14:37:04.225941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.225973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.226165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.226198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.226457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.226491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.226695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.226726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.226976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.227010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.227268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.227301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.227489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.227524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.227788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.227822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.228048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.228081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.228271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.228304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 
00:27:15.088 [2024-11-17 14:37:04.228527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.228562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.228748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.228781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.228954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.228987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.229167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.229202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.229486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.229537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.229822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.229859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.230109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.230141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.230449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.230485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.230693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.230730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.230978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.231014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 
00:27:15.088 [2024-11-17 14:37:04.231133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.231166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.231430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.088 [2024-11-17 14:37:04.231466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.088 qpair failed and we were unable to recover it. 00:27:15.088 [2024-11-17 14:37:04.231651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.231685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.231875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.231908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.232078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.232112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.232366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.232401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.232664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.232697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.232981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.233014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.233152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.233185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.233378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.233412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 
00:27:15.089 [2024-11-17 14:37:04.233624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.233656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.233888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.233921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.234091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.234124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.234367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.234402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.234661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.234695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.234978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.235011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.235230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.235263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.235389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.235423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.235709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.235742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.235918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.235951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 
00:27:15.089 [2024-11-17 14:37:04.236205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.236239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.236484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.236518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.236757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.236790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.236907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.236940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.237120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.237154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.237426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.237459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.237637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.237670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.237791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.237824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.238030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.238062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.238236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.238269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 
00:27:15.089 [2024-11-17 14:37:04.238447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.238514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.238654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.238687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.238807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.238840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.239027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.239060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.089 [2024-11-17 14:37:04.239207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.089 [2024-11-17 14:37:04.239255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.089 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.239428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.239464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.239702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.239736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.239979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.240013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.240136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.240170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.240416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.240450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 
00:27:15.090 [2024-11-17 14:37:04.240567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.240601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.240785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.240818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.240927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.240961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.241173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.241206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.241411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.241444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.241619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.241652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.241893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.241926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.242096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.242128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.242278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.242310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.242531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.242565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 
00:27:15.090 [2024-11-17 14:37:04.242824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.242858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.243120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.243154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.243401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.243436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.243562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.243595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.243783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.243815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.244026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.244059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.244253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.244287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.244475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.244509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.244701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.244733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.245041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.245074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 
00:27:15.090 [2024-11-17 14:37:04.245275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.245308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.245457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.245491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.245685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.245719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.245916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.245949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.090 [2024-11-17 14:37:04.246228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.090 [2024-11-17 14:37:04.246262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.090 qpair failed and we were unable to recover it. 00:27:15.360 [2024-11-17 14:37:04.246379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-11-17 14:37:04.246414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-11-17 14:37:04.246588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-11-17 14:37:04.246622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-11-17 14:37:04.246870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-11-17 14:37:04.246903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-11-17 14:37:04.247143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-11-17 14:37:04.247175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-11-17 14:37:04.247362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-11-17 14:37:04.247397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 
00:27:15.360 [2024-11-17 14:37:04.247635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.360 [2024-11-17 14:37:04.247668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:15.360 qpair failed and we were unable to recover it.
00:27:15.360 [2024-11-17 14:37:04.247862 .. 14:37:04.266075] (the same three-line failure: connect() failed, errno = 111; sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. Repeated for every reconnect attempt in this interval; the two xtrace lines below were emitted mid-run.)
00:27:15.362 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:15.362 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:15.362 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:15.362 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:15.362 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:15.362 [2024-11-17 14:37:04.266314 .. 14:37:04.266795] (three more identical tqpair=0x7f5190000b90 failures, interleaved with the xtrace lines above)
00:27:15.362 [2024-11-17 14:37:04.267016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.362 [2024-11-17 14:37:04.267064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420
00:27:15.362 qpair failed and we were unable to recover it.
00:27:15.362 [2024-11-17 14:37:04.267268 .. 14:37:04.268479] (the same failure, now reported against tqpair=0x1fb6ba0, repeated five more times)
00:27:15.362 [2024-11-17 14:37:04.268719 .. 14:37:04.286862] (the same three-line failure repeated continuously for tqpair=0x1fb6ba0 with addr=10.0.0.2, port=4420 across this whole interval)
00:27:15.365 [2024-11-17 14:37:04.287004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.365 [2024-11-17 14:37:04.287058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420
00:27:15.365 qpair failed and we were unable to recover it.
00:27:15.365 [2024-11-17 14:37:04.287191 .. 14:37:04.292304] (the same three-line failure repeated continuously for tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 across this whole interval)
00:27:15.366 [2024-11-17 14:37:04.292490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.292524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.292703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.292735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.292839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.292872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.292980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.293012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.293116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.293149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.293273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.293306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.293483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.293517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.293623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.293657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.293790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.293822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f518c000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.293948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.293987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 
00:27:15.366 [2024-11-17 14:37:04.294109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.294141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.294311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.294345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.294460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.294491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.294667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.294699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.294878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.294910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.295018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.295049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.295179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.295212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.295388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.295423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.295533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.295566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.295691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.295724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 
00:27:15.366 [2024-11-17 14:37:04.295835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.295877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.296052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.296083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.296215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.296254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.296370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.296404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.296534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.296567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.296674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.296708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.296818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.296851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.297024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.297057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.297160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.297192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.297313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.297347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 
00:27:15.366 [2024-11-17 14:37:04.297482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.297515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.297693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.297727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.297844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.297877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.298003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.298035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.298139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.366 [2024-11-17 14:37:04.298171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.366 qpair failed and we were unable to recover it. 00:27:15.366 [2024-11-17 14:37:04.298414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.298449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.298648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.298682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.298856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.298889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.299020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.299053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.299165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.299198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 
00:27:15.367 [2024-11-17 14:37:04.299380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.299415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.299524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.299557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.299681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.299714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.299837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.299872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.299983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.300016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.300127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.300175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.300367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.300402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.300528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.300560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.300675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.300707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.300825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.300873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 
00:27:15.367 [2024-11-17 14:37:04.301004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.301038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.301157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.301189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.301370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.301405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.301597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.301630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.301740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.301771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.301955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.301987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.367 [2024-11-17 14:37:04.302169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.302202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.302336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.302380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:15.367 [2024-11-17 14:37:04.302554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.302588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 
00:27:15.367 [2024-11-17 14:37:04.302764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.302796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.367 [2024-11-17 14:37:04.302968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.303001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.303126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.367 [2024-11-17 14:37:04.303159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.303278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.303309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.303501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.303534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.303732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.303764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.303874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.303907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.304086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.304119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.304222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.304255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 
00:27:15.367 [2024-11-17 14:37:04.304373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.304408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.304604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.304636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.304816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.367 [2024-11-17 14:37:04.304849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.367 qpair failed and we were unable to recover it. 00:27:15.367 [2024-11-17 14:37:04.305035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.305067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.305179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.305211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.305319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.305360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.305506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.305540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.305659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.305699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.305935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.305967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.306086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.306118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 
00:27:15.368 [2024-11-17 14:37:04.306292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.306324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.306444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.306476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.306686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.306718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.306833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.306865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.306979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.307009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.307120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.307152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.307326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.307364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.307619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.307651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.307821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.307853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.307972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.308007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 
00:27:15.368 [2024-11-17 14:37:04.308198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.308231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.308435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.308468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.308588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.308619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.308721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.308753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.308930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.308963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.309146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.309179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.309369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.309404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.309531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.309563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.309674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.309705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.309814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.309845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 
00:27:15.368 [2024-11-17 14:37:04.310035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.310066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.310218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.310250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.310374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.310407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.310521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.310553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.310816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.310848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.310978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.311010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.311138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.311169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.311382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.311415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.311604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.311635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.311825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.311857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 
00:27:15.368 [2024-11-17 14:37:04.312132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.312164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-11-17 14:37:04.312450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-11-17 14:37:04.312483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.312753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.312785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.312906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.312938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.313077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.313108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.313379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.313411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.313591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.313623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.313828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.313861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.314149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.314180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.314444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.314476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 
00:27:15.369 [2024-11-17 14:37:04.314659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.314691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.314896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.314927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.315101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.315133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.315312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.315344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.315556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.315589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.315778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.315811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.315999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.316031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.316228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.316259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.316466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.316500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-17 14:37:04.316684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-17 14:37:04.316720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 
00:27:15.369 [2024-11-17 14:37:04.316932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.369 [2024-11-17 14:37:04.316964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.369 qpair failed and we were unable to recover it.
00:27:15.371 (the same connect() failed / sock connection error / qpair failed triplet repeats for every retry from 14:37:04.317164 through 14:37:04.338511; only the timestamps change, with tqpair=0x7f5198000b90, addr=10.0.0.2, port=4420 throughout)
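(note: errno = 111 is ECONNREFUSED — the initiator is dialing 10.0.0.2:4420 before the target side has a TCP listener up, so the kernel rejects every connect() immediately and the NVMe/TCP driver keeps retrying. A minimal sketch for observing the listener state from a shell, assuming nc and ss are installed on the host; neither is used by this test:

# exits non-zero with "Connection refused" until the target's listener exists
nc -z -w 1 10.0.0.2 4420 || echo "port 4420 refusing connections (errno 111)"
# once the target attaches its listener, the socket shows up here:
ss -ltn 'sport = :4420'
)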
00:27:15.371 (failure triplet repeats from 14:37:04.338718 through 14:37:04.339858)
00:27:15.372 Malloc0
00:27:15.372 (failure triplet repeats for the remaining attempts on this line, through 14:37:04.340613)
00:27:15.372 [2024-11-17 14:37:04.340850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.372 [2024-11-17 14:37:04.340883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420
00:27:15.372 qpair failed and we were unable to recover it.
00:27:15.372 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.372 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:15.372 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.372 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:15.372 (failure triplet repeats from 14:37:04.341173 through 14:37:04.342590 while the transport is created)
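(note: rpc_cmd in the trace above is the autotest harness's wrapper around SPDK's scripts/rpc.py, so the transport creation corresponds roughly to the manual call sketched below; the RPC socket path is the SPDK default and an assumption of this note, and the test's extra -o option is taken verbatim from the trace — see rpc.py nvmf_create_transport --help for its meaning rather than this sketch:

# create the TCP transport inside the running nvmf target process
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp
)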
00:27:15.372 (failure triplet repeats from 14:37:04.342770 through 14:37:04.347573)
00:27:15.372 [2024-11-17 14:37:04.347723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:15.373 (failure triplet repeats from 14:37:04.347836 through 14:37:04.349901)
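(note: the *** TCP Transport Init *** notice marks the target finishing its TCP transport setup; connect() will keep returning ECONNREFUSED until a listener on 10.0.0.2:4420 is also attached to a subsystem. The standard rpc.py form of that step is sketched below — the exact invocation this run uses lives in host/target_disconnect.sh and is not shown in this excerpt:

# attach a TCP listener; only after this can connects to 10.0.0.2:4420 succeed
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
)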
00:27:15.373 (failure triplet repeats from 14:37:04.350091 through 14:37:04.355068)
00:27:15.373 (failure triplet repeats from 14:37:04.355218 through 14:37:04.356366)
00:27:15.374 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.374 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:15.374 (failure triplet repeats from 14:37:04.356541 through 14:37:04.357035)
00:27:15.374 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.374 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:15.374 (failure triplet repeats from 14:37:04.357301 through 14:37:04.359319)
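(note: the traced nvmf_create_subsystem call creates the NVMe-oF subsystem the host will connect to; -a allows any host NQN to connect and -s sets the subsystem's serial number. Equivalent manual form, as a sketch:

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001
)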
00:27:15.374 [2024-11-17 14:37:04.359584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.374 [2024-11-17 14:37:04.359628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5190000b90 with addr=10.0.0.2, port=4420
00:27:15.374 qpair failed and we were unable to recover it.
00:27:15.374 (failure triplet repeats against tqpair=0x7f5190000b90 from 14:37:04.359760 through 14:37:04.361444, then against tqpair=0x7f5198000b90 at 14:37:04.361578)
00:27:15.374 [2024-11-17 14:37:04.361832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.361864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-17 14:37:04.362056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.362087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-17 14:37:04.362210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.362242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-17 14:37:04.362504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.362536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-17 14:37:04.362774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.362805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-17 14:37:04.363062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.363094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-17 14:37:04.363280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.363312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-17 14:37:04.363536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.363570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-17 14:37:04.363746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.363777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-17 14:37:04.363949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.363980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 
00:27:15.374 [2024-11-17 14:37:04.364261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.364293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.374 [2024-11-17 14:37:04.364478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.364512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-17 14:37:04.364689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-17 14:37:04.364721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:15.375 [2024-11-17 14:37:04.364988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.365021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.375 [2024-11-17 14:37:04.365190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.365222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.375 [2024-11-17 14:37:04.365506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.365539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.365745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.365777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.366016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.366047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 
00:27:15.375 [2024-11-17 14:37:04.366229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.366260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.366544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.366578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.366829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.366861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.367052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.367084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.367320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.367361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.367601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.367633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.367829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.367861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.367970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.368002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.368284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.368315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.368581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.368613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 
00:27:15.375 [2024-11-17 14:37:04.368889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.368922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.369205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.369237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.369499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.369531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.369815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.369847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.370032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.370064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.370307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.370340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.370540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.370573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.370758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.370789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.370963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.370995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.371203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.371240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 
00:27:15.375 [2024-11-17 14:37:04.371508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.371540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.371716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.371748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.371986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.372018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.372146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.372177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.372360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.372392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.375 [2024-11-17 14:37:04.372564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.372596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.372848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:15.375 [2024-11-17 14:37:04.372879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.373101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.373133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 
00:27:15.375 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.375 [2024-11-17 14:37:04.373304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.373336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.375 [2024-11-17 14:37:04.373570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.373602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-17 14:37:04.373855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-17 14:37:04.373887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.374061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-17 14:37:04.374092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.374280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-17 14:37:04.374312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.374444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-17 14:37:04.374477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.374658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-17 14:37:04.374688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.374864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-17 14:37:04.374896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.375140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-17 14:37:04.375172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 
00:27:15.376 [2024-11-17 14:37:04.375419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-17 14:37:04.375452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.375704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-17 14:37:04.375736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5198000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.375940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.376 [2024-11-17 14:37:04.378401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.376 [2024-11-17 14:37:04.378533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.376 [2024-11-17 14:37:04.378578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.376 [2024-11-17 14:37:04.378602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.376 [2024-11-17 14:37:04.378625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.376 [2024-11-17 14:37:04.378678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.376 qpair failed and we were unable to recover it. 
00:27:15.376 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.376 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:15.376 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.376 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.376 [2024-11-17 14:37:04.388300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.376 [2024-11-17 14:37:04.388413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.376 [2024-11-17 14:37:04.388456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.376 [2024-11-17 14:37:04.388482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.376 [2024-11-17 14:37:04.388504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.376 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.376 [2024-11-17 14:37:04.388552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 14:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1622481 00:27:15.376 [2024-11-17 14:37:04.398310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.376 [2024-11-17 14:37:04.398400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.376 [2024-11-17 14:37:04.398428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.376 [2024-11-17 14:37:04.398443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.376 [2024-11-17 14:37:04.398457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.376 [2024-11-17 14:37:04.398489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.376 qpair failed and we were unable to recover it. 
00:27:15.376 [2024-11-17 14:37:04.408306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.376 [2024-11-17 14:37:04.408378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.376 [2024-11-17 14:37:04.408397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.376 [2024-11-17 14:37:04.408407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.376 [2024-11-17 14:37:04.408416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.376 [2024-11-17 14:37:04.408437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.418358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.376 [2024-11-17 14:37:04.418461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.376 [2024-11-17 14:37:04.418476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.376 [2024-11-17 14:37:04.418483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.376 [2024-11-17 14:37:04.418490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.376 [2024-11-17 14:37:04.418506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.428305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.376 [2024-11-17 14:37:04.428363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.376 [2024-11-17 14:37:04.428377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.376 [2024-11-17 14:37:04.428385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.376 [2024-11-17 14:37:04.428391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.376 [2024-11-17 14:37:04.428407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.376 qpair failed and we were unable to recover it. 
00:27:15.376 [2024-11-17 14:37:04.438333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.376 [2024-11-17 14:37:04.438389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.376 [2024-11-17 14:37:04.438403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.376 [2024-11-17 14:37:04.438410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.376 [2024-11-17 14:37:04.438417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.376 [2024-11-17 14:37:04.438432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.448405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.376 [2024-11-17 14:37:04.448486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.376 [2024-11-17 14:37:04.448501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.376 [2024-11-17 14:37:04.448508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.376 [2024-11-17 14:37:04.448514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.376 [2024-11-17 14:37:04.448530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-17 14:37:04.458459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.376 [2024-11-17 14:37:04.458517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.376 [2024-11-17 14:37:04.458531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.376 [2024-11-17 14:37:04.458538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.376 [2024-11-17 14:37:04.458546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.376 [2024-11-17 14:37:04.458562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.376 qpair failed and we were unable to recover it. 
00:27:15.376 [2024-11-17 14:37:04.468439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.376 [2024-11-17 14:37:04.468500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.377 [2024-11-17 14:37:04.468518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.377 [2024-11-17 14:37:04.468526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.377 [2024-11-17 14:37:04.468532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.377 [2024-11-17 14:37:04.468548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-17 14:37:04.478461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.377 [2024-11-17 14:37:04.478520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.377 [2024-11-17 14:37:04.478534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.377 [2024-11-17 14:37:04.478541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.377 [2024-11-17 14:37:04.478547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.377 [2024-11-17 14:37:04.478563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-17 14:37:04.488507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.377 [2024-11-17 14:37:04.488571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.377 [2024-11-17 14:37:04.488585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.377 [2024-11-17 14:37:04.488592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.377 [2024-11-17 14:37:04.488599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.377 [2024-11-17 14:37:04.488614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.377 qpair failed and we were unable to recover it. 
00:27:15.377 [2024-11-17 14:37:04.498521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.377 [2024-11-17 14:37:04.498600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.377 [2024-11-17 14:37:04.498614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.377 [2024-11-17 14:37:04.498621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.377 [2024-11-17 14:37:04.498628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.377 [2024-11-17 14:37:04.498643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-17 14:37:04.508548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.377 [2024-11-17 14:37:04.508600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.377 [2024-11-17 14:37:04.508614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.377 [2024-11-17 14:37:04.508624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.377 [2024-11-17 14:37:04.508632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.377 [2024-11-17 14:37:04.508647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-17 14:37:04.518583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.377 [2024-11-17 14:37:04.518638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.377 [2024-11-17 14:37:04.518652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.377 [2024-11-17 14:37:04.518659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.377 [2024-11-17 14:37:04.518666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.377 [2024-11-17 14:37:04.518682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.377 qpair failed and we were unable to recover it. 
00:27:15.377 [2024-11-17 14:37:04.528614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.377 [2024-11-17 14:37:04.528688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.377 [2024-11-17 14:37:04.528702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.377 [2024-11-17 14:37:04.528709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.377 [2024-11-17 14:37:04.528715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.377 [2024-11-17 14:37:04.528731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-17 14:37:04.538628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.377 [2024-11-17 14:37:04.538680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.377 [2024-11-17 14:37:04.538694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.377 [2024-11-17 14:37:04.538701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.377 [2024-11-17 14:37:04.538707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.377 [2024-11-17 14:37:04.538723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-17 14:37:04.548643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.377 [2024-11-17 14:37:04.548697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.377 [2024-11-17 14:37:04.548711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.377 [2024-11-17 14:37:04.548718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.377 [2024-11-17 14:37:04.548724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.377 [2024-11-17 14:37:04.548739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.377 qpair failed and we were unable to recover it. 
00:27:15.377 [2024-11-17 14:37:04.558681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.377 [2024-11-17 14:37:04.558740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.377 [2024-11-17 14:37:04.558753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.377 [2024-11-17 14:37:04.558760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.377 [2024-11-17 14:37:04.558767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.377 [2024-11-17 14:37:04.558782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-17 14:37:04.568718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.377 [2024-11-17 14:37:04.568794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.377 [2024-11-17 14:37:04.568808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.377 [2024-11-17 14:37:04.568816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.377 [2024-11-17 14:37:04.568822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.377 [2024-11-17 14:37:04.568837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.638 [2024-11-17 14:37:04.578742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.638 [2024-11-17 14:37:04.578800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.638 [2024-11-17 14:37:04.578813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.638 [2024-11-17 14:37:04.578821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.638 [2024-11-17 14:37:04.578829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.638 [2024-11-17 14:37:04.578844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.638 qpair failed and we were unable to recover it. 
00:27:15.638 [2024-11-17 14:37:04.588763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.638 [2024-11-17 14:37:04.588820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.638 [2024-11-17 14:37:04.588836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.638 [2024-11-17 14:37:04.588843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.638 [2024-11-17 14:37:04.588850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.638 [2024-11-17 14:37:04.588866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.638 qpair failed and we were unable to recover it. 00:27:15.638 [2024-11-17 14:37:04.598785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.638 [2024-11-17 14:37:04.598839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.638 [2024-11-17 14:37:04.598854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.638 [2024-11-17 14:37:04.598861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.638 [2024-11-17 14:37:04.598867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.638 [2024-11-17 14:37:04.598882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.638 qpair failed and we were unable to recover it. 00:27:15.639 [2024-11-17 14:37:04.608818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.608878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.608892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.608899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.608907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.608922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 
00:27:15.639 [2024-11-17 14:37:04.618849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.618903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.618917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.618924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.618931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.618948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 00:27:15.639 [2024-11-17 14:37:04.628867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.628953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.628967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.628975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.628981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.628997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 00:27:15.639 [2024-11-17 14:37:04.638915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.638973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.638987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.638999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.639005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.639021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 
00:27:15.639 [2024-11-17 14:37:04.648943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.649043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.649058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.649065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.649071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.649087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 00:27:15.639 [2024-11-17 14:37:04.658958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.659012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.659026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.659033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.659040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.659056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 00:27:15.639 [2024-11-17 14:37:04.668987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.669037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.669052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.669059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.669066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.669081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 
00:27:15.639 [2024-11-17 14:37:04.679015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.679070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.679084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.679091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.679098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.679116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 00:27:15.639 [2024-11-17 14:37:04.688995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.689073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.689087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.689094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.689101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.689116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 00:27:15.639 [2024-11-17 14:37:04.699111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.699170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.699184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.699192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.699198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.699213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 
00:27:15.639 [2024-11-17 14:37:04.709086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.709143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.709157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.709164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.709171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.709187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 00:27:15.639 [2024-11-17 14:37:04.719125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.719178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.719192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.719199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.719206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.719222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 00:27:15.639 [2024-11-17 14:37:04.729145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.639 [2024-11-17 14:37:04.729204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.639 [2024-11-17 14:37:04.729218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.639 [2024-11-17 14:37:04.729226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.639 [2024-11-17 14:37:04.729232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.639 [2024-11-17 14:37:04.729248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.639 qpair failed and we were unable to recover it. 
00:27:15.639 [2024-11-17 14:37:04.739181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.739232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.739246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.739253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.739259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.739274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 00:27:15.640 [2024-11-17 14:37:04.749191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.749246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.749260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.749267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.749274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.749289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 00:27:15.640 [2024-11-17 14:37:04.759255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.759312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.759326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.759333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.759340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.759358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 
00:27:15.640 [2024-11-17 14:37:04.769286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.769350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.769372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.769380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.769386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.769401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 00:27:15.640 [2024-11-17 14:37:04.779321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.779401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.779416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.779423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.779430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.779448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 00:27:15.640 [2024-11-17 14:37:04.789323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.789387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.789401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.789409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.789415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.789431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 
00:27:15.640 [2024-11-17 14:37:04.799392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.799493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.799508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.799516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.799522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.799538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 00:27:15.640 [2024-11-17 14:37:04.809394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.809452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.809465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.809473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.809479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.809498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 00:27:15.640 [2024-11-17 14:37:04.819423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.819480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.819494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.819501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.819508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.819523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 
00:27:15.640 [2024-11-17 14:37:04.829436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.829493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.829508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.829515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.829522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.829537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 00:27:15.640 [2024-11-17 14:37:04.839430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.839484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.839498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.839505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.839514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.839531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 00:27:15.640 [2024-11-17 14:37:04.849530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.640 [2024-11-17 14:37:04.849591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.640 [2024-11-17 14:37:04.849604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.640 [2024-11-17 14:37:04.849611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.640 [2024-11-17 14:37:04.849619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.640 [2024-11-17 14:37:04.849635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.640 qpair failed and we were unable to recover it. 
00:27:15.901 [2024-11-17 14:37:04.859511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.901 [2024-11-17 14:37:04.859566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.901 [2024-11-17 14:37:04.859581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.901 [2024-11-17 14:37:04.859588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.901 [2024-11-17 14:37:04.859594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.901 [2024-11-17 14:37:04.859609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.901 qpair failed and we were unable to recover it. 00:27:15.901 [2024-11-17 14:37:04.869476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.901 [2024-11-17 14:37:04.869534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.901 [2024-11-17 14:37:04.869548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.901 [2024-11-17 14:37:04.869556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.901 [2024-11-17 14:37:04.869562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.901 [2024-11-17 14:37:04.869578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.901 qpair failed and we were unable to recover it. 00:27:15.901 [2024-11-17 14:37:04.879508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.901 [2024-11-17 14:37:04.879566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.901 [2024-11-17 14:37:04.879579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.901 [2024-11-17 14:37:04.879586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.901 [2024-11-17 14:37:04.879593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.901 [2024-11-17 14:37:04.879608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.901 qpair failed and we were unable to recover it. 
00:27:15.901 [2024-11-17 14:37:04.889628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.901 [2024-11-17 14:37:04.889695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.901 [2024-11-17 14:37:04.889709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.901 [2024-11-17 14:37:04.889716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.901 [2024-11-17 14:37:04.889722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.901 [2024-11-17 14:37:04.889737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.901 qpair failed and we were unable to recover it. 00:27:15.901 [2024-11-17 14:37:04.899643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.901 [2024-11-17 14:37:04.899699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.901 [2024-11-17 14:37:04.899718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.901 [2024-11-17 14:37:04.899725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.901 [2024-11-17 14:37:04.899732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.901 [2024-11-17 14:37:04.899747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.901 qpair failed and we were unable to recover it. 00:27:15.901 [2024-11-17 14:37:04.909660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.901 [2024-11-17 14:37:04.909714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.901 [2024-11-17 14:37:04.909727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.901 [2024-11-17 14:37:04.909735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.901 [2024-11-17 14:37:04.909741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.901 [2024-11-17 14:37:04.909757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.901 qpair failed and we were unable to recover it. 
00:27:15.901 [2024-11-17 14:37:04.919622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.901 [2024-11-17 14:37:04.919680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.901 [2024-11-17 14:37:04.919694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.901 [2024-11-17 14:37:04.919701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.901 [2024-11-17 14:37:04.919708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.901 [2024-11-17 14:37:04.919723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.901 qpair failed and we were unable to recover it. 00:27:15.901 [2024-11-17 14:37:04.929662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.901 [2024-11-17 14:37:04.929729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:04.929742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:04.929750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:04.929757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:04.929772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 00:27:15.902 [2024-11-17 14:37:04.939687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:04.939741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:04.939755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:04.939762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:04.939772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:04.939789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 
00:27:15.902 [2024-11-17 14:37:04.949751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:04.949807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:04.949821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:04.949828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:04.949835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:04.949850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 00:27:15.902 [2024-11-17 14:37:04.959810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:04.959866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:04.959882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:04.959889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:04.959895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:04.959911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 00:27:15.902 [2024-11-17 14:37:04.969826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:04.969888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:04.969903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:04.969910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:04.969916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:04.969931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 
00:27:15.902 [2024-11-17 14:37:04.979793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:04.979845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:04.979859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:04.979866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:04.979872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:04.979888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 00:27:15.902 [2024-11-17 14:37:04.989873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:04.989927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:04.989941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:04.989948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:04.989955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:04.989970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 00:27:15.902 [2024-11-17 14:37:04.999915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:04.999971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:04.999985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:04.999992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:04.999999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:05.000014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 
00:27:15.902 [2024-11-17 14:37:05.009891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:05.009982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:05.009997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:05.010005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:05.010012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:05.010027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 00:27:15.902 [2024-11-17 14:37:05.019953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:05.020017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:05.020030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:05.020038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:05.020044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:05.020060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 00:27:15.902 [2024-11-17 14:37:05.029945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:05.029998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:05.030016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:05.030023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:05.030029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:05.030045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 
00:27:15.902 [2024-11-17 14:37:05.039965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:05.040017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:05.040031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:05.040038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:05.040045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:05.040060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 00:27:15.902 [2024-11-17 14:37:05.050133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:05.050211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:05.050225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.902 [2024-11-17 14:37:05.050232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.902 [2024-11-17 14:37:05.050238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.902 [2024-11-17 14:37:05.050254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.902 qpair failed and we were unable to recover it. 00:27:15.902 [2024-11-17 14:37:05.060032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.902 [2024-11-17 14:37:05.060087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.902 [2024-11-17 14:37:05.060103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.903 [2024-11-17 14:37:05.060111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.903 [2024-11-17 14:37:05.060118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.903 [2024-11-17 14:37:05.060133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.903 qpair failed and we were unable to recover it. 
00:27:15.903 [2024-11-17 14:37:05.070110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.903 [2024-11-17 14:37:05.070189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.903 [2024-11-17 14:37:05.070203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.903 [2024-11-17 14:37:05.070214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.903 [2024-11-17 14:37:05.070220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.903 [2024-11-17 14:37:05.070235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.903 qpair failed and we were unable to recover it. 00:27:15.903 [2024-11-17 14:37:05.080154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.903 [2024-11-17 14:37:05.080208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.903 [2024-11-17 14:37:05.080222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.903 [2024-11-17 14:37:05.080229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.903 [2024-11-17 14:37:05.080236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.903 [2024-11-17 14:37:05.080251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.903 qpair failed and we were unable to recover it. 00:27:15.903 [2024-11-17 14:37:05.090108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.903 [2024-11-17 14:37:05.090175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.903 [2024-11-17 14:37:05.090190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.903 [2024-11-17 14:37:05.090197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.903 [2024-11-17 14:37:05.090203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.903 [2024-11-17 14:37:05.090218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.903 qpair failed and we were unable to recover it. 
00:27:15.903 [2024-11-17 14:37:05.100197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.903 [2024-11-17 14:37:05.100251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.903 [2024-11-17 14:37:05.100265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.903 [2024-11-17 14:37:05.100272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.903 [2024-11-17 14:37:05.100278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.903 [2024-11-17 14:37:05.100293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.903 qpair failed and we were unable to recover it. 00:27:15.903 [2024-11-17 14:37:05.110168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.903 [2024-11-17 14:37:05.110220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.903 [2024-11-17 14:37:05.110234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.903 [2024-11-17 14:37:05.110242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.903 [2024-11-17 14:37:05.110248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.903 [2024-11-17 14:37:05.110264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.903 qpair failed and we were unable to recover it. 00:27:15.903 [2024-11-17 14:37:05.120254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.903 [2024-11-17 14:37:05.120336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.903 [2024-11-17 14:37:05.120350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.903 [2024-11-17 14:37:05.120362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.903 [2024-11-17 14:37:05.120368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:15.903 [2024-11-17 14:37:05.120384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.903 qpair failed and we were unable to recover it. 
00:27:16.164 [2024-11-17 14:37:05.130243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.164 [2024-11-17 14:37:05.130346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.164 [2024-11-17 14:37:05.130364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.164 [2024-11-17 14:37:05.130371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.164 [2024-11-17 14:37:05.130378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.164 [2024-11-17 14:37:05.130394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.164 qpair failed and we were unable to recover it. 00:27:16.164 [2024-11-17 14:37:05.140314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.164 [2024-11-17 14:37:05.140374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.164 [2024-11-17 14:37:05.140388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.164 [2024-11-17 14:37:05.140395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.164 [2024-11-17 14:37:05.140401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.164 [2024-11-17 14:37:05.140416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.164 qpair failed and we were unable to recover it. 00:27:16.164 [2024-11-17 14:37:05.150288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.164 [2024-11-17 14:37:05.150342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.164 [2024-11-17 14:37:05.150361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.164 [2024-11-17 14:37:05.150368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.164 [2024-11-17 14:37:05.150374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.164 [2024-11-17 14:37:05.150390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.164 qpair failed and we were unable to recover it. 
00:27:16.164 [2024-11-17 14:37:05.160429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.164 [2024-11-17 14:37:05.160491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.164 [2024-11-17 14:37:05.160505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.164 [2024-11-17 14:37:05.160512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.164 [2024-11-17 14:37:05.160519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.164 [2024-11-17 14:37:05.160534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.164 qpair failed and we were unable to recover it. 00:27:16.164 [2024-11-17 14:37:05.170406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.164 [2024-11-17 14:37:05.170486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.164 [2024-11-17 14:37:05.170500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.164 [2024-11-17 14:37:05.170507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.164 [2024-11-17 14:37:05.170514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.164 [2024-11-17 14:37:05.170529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.164 qpair failed and we were unable to recover it. 00:27:16.164 [2024-11-17 14:37:05.180475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.164 [2024-11-17 14:37:05.180541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.164 [2024-11-17 14:37:05.180555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.164 [2024-11-17 14:37:05.180562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.164 [2024-11-17 14:37:05.180569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.164 [2024-11-17 14:37:05.180585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.164 qpair failed and we were unable to recover it. 
00:27:16.164 [2024-11-17 14:37:05.190526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.164 [2024-11-17 14:37:05.190583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.164 [2024-11-17 14:37:05.190597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.164 [2024-11-17 14:37:05.190604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.164 [2024-11-17 14:37:05.190610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.164 [2024-11-17 14:37:05.190625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.164 qpair failed and we were unable to recover it. 00:27:16.164 [2024-11-17 14:37:05.200550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.164 [2024-11-17 14:37:05.200644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.164 [2024-11-17 14:37:05.200658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.164 [2024-11-17 14:37:05.200668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.164 [2024-11-17 14:37:05.200675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.164 [2024-11-17 14:37:05.200691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.164 qpair failed and we were unable to recover it. 00:27:16.164 [2024-11-17 14:37:05.210575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.165 [2024-11-17 14:37:05.210632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.165 [2024-11-17 14:37:05.210646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.165 [2024-11-17 14:37:05.210653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.165 [2024-11-17 14:37:05.210659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.165 [2024-11-17 14:37:05.210675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.165 qpair failed and we were unable to recover it. 
00:27:16.165 [2024-11-17 14:37:05.220566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.165 [2024-11-17 14:37:05.220622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.165 [2024-11-17 14:37:05.220635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.165 [2024-11-17 14:37:05.220642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.165 [2024-11-17 14:37:05.220648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.165 [2024-11-17 14:37:05.220663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.165 qpair failed and we were unable to recover it. 00:27:16.165 [2024-11-17 14:37:05.230547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.165 [2024-11-17 14:37:05.230598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.165 [2024-11-17 14:37:05.230612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.165 [2024-11-17 14:37:05.230619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.165 [2024-11-17 14:37:05.230625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.165 [2024-11-17 14:37:05.230640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.165 qpair failed and we were unable to recover it. 00:27:16.165 [2024-11-17 14:37:05.240643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.165 [2024-11-17 14:37:05.240705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.165 [2024-11-17 14:37:05.240744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.165 [2024-11-17 14:37:05.240752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.165 [2024-11-17 14:37:05.240759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.165 [2024-11-17 14:37:05.240787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.165 qpair failed and we were unable to recover it. 
00:27:16.165 [2024-11-17 14:37:05.250669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.165 [2024-11-17 14:37:05.250728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.165 [2024-11-17 14:37:05.250743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.165 [2024-11-17 14:37:05.250750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.165 [2024-11-17 14:37:05.250757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.165 [2024-11-17 14:37:05.250774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.165 qpair failed and we were unable to recover it. 00:27:16.165 [2024-11-17 14:37:05.260698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.165 [2024-11-17 14:37:05.260753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.165 [2024-11-17 14:37:05.260767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.165 [2024-11-17 14:37:05.260774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.165 [2024-11-17 14:37:05.260781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.165 [2024-11-17 14:37:05.260796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.165 qpair failed and we were unable to recover it. 00:27:16.165 [2024-11-17 14:37:05.270703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.165 [2024-11-17 14:37:05.270759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.165 [2024-11-17 14:37:05.270773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.165 [2024-11-17 14:37:05.270781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.165 [2024-11-17 14:37:05.270787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.165 [2024-11-17 14:37:05.270803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.165 qpair failed and we were unable to recover it. 
00:27:16.165 [2024-11-17 14:37:05.280747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.165 [2024-11-17 14:37:05.280800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.165 [2024-11-17 14:37:05.280813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.165 [2024-11-17 14:37:05.280820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.165 [2024-11-17 14:37:05.280827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.165 [2024-11-17 14:37:05.280842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.165 qpair failed and we were unable to recover it. 00:27:16.165 [2024-11-17 14:37:05.290829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.165 [2024-11-17 14:37:05.290926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.165 [2024-11-17 14:37:05.290939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.165 [2024-11-17 14:37:05.290946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.165 [2024-11-17 14:37:05.290952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.165 [2024-11-17 14:37:05.290969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.165 qpair failed and we were unable to recover it. 00:27:16.165 [2024-11-17 14:37:05.300824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.165 [2024-11-17 14:37:05.300890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.165 [2024-11-17 14:37:05.300904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.165 [2024-11-17 14:37:05.300912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.165 [2024-11-17 14:37:05.300918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.165 [2024-11-17 14:37:05.300933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.165 qpair failed and we were unable to recover it. 
00:27:16.165 [2024-11-17 14:37:05.310842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.165 [2024-11-17 14:37:05.310901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.165 [2024-11-17 14:37:05.310915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.165 [2024-11-17 14:37:05.310922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.165 [2024-11-17 14:37:05.310929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.165 [2024-11-17 14:37:05.310944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.165 qpair failed and we were unable to recover it.
00:27:16.165 [2024-11-17 14:37:05.320838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.165 [2024-11-17 14:37:05.320892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.165 [2024-11-17 14:37:05.320906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.165 [2024-11-17 14:37:05.320913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.165 [2024-11-17 14:37:05.320920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.165 [2024-11-17 14:37:05.320936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.165 qpair failed and we were unable to recover it.
00:27:16.165 [2024-11-17 14:37:05.330877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.165 [2024-11-17 14:37:05.330934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.165 [2024-11-17 14:37:05.330951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.165 [2024-11-17 14:37:05.330958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.165 [2024-11-17 14:37:05.330964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.165 [2024-11-17 14:37:05.330980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.165 qpair failed and we were unable to recover it.
00:27:16.165 [2024-11-17 14:37:05.340903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.166 [2024-11-17 14:37:05.340957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.166 [2024-11-17 14:37:05.340971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.166 [2024-11-17 14:37:05.340978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.166 [2024-11-17 14:37:05.340984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.166 [2024-11-17 14:37:05.340999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.166 qpair failed and we were unable to recover it.
00:27:16.166 [2024-11-17 14:37:05.350939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.166 [2024-11-17 14:37:05.350996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.166 [2024-11-17 14:37:05.351010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.166 [2024-11-17 14:37:05.351017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.166 [2024-11-17 14:37:05.351024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.166 [2024-11-17 14:37:05.351040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.166 qpair failed and we were unable to recover it.
00:27:16.166 [2024-11-17 14:37:05.360958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.166 [2024-11-17 14:37:05.361015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.166 [2024-11-17 14:37:05.361029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.166 [2024-11-17 14:37:05.361036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.166 [2024-11-17 14:37:05.361043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.166 [2024-11-17 14:37:05.361058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.166 qpair failed and we were unable to recover it.
00:27:16.166 [2024-11-17 14:37:05.370989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.166 [2024-11-17 14:37:05.371044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.166 [2024-11-17 14:37:05.371059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.166 [2024-11-17 14:37:05.371066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.166 [2024-11-17 14:37:05.371078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.166 [2024-11-17 14:37:05.371093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.166 qpair failed and we were unable to recover it.
00:27:16.166 [2024-11-17 14:37:05.381064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.166 [2024-11-17 14:37:05.381172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.166 [2024-11-17 14:37:05.381188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.166 [2024-11-17 14:37:05.381195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.166 [2024-11-17 14:37:05.381202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.166 [2024-11-17 14:37:05.381217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.166 qpair failed and we were unable to recover it.
00:27:16.426 [2024-11-17 14:37:05.391035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.426 [2024-11-17 14:37:05.391090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.426 [2024-11-17 14:37:05.391104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.426 [2024-11-17 14:37:05.391111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.426 [2024-11-17 14:37:05.391118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.426 [2024-11-17 14:37:05.391134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.426 qpair failed and we were unable to recover it.
00:27:16.426 [2024-11-17 14:37:05.401072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.426 [2024-11-17 14:37:05.401122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.426 [2024-11-17 14:37:05.401137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.426 [2024-11-17 14:37:05.401144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.426 [2024-11-17 14:37:05.401151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.426 [2024-11-17 14:37:05.401165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.426 qpair failed and we were unable to recover it.
00:27:16.426 [2024-11-17 14:37:05.411112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.426 [2024-11-17 14:37:05.411170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.426 [2024-11-17 14:37:05.411184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.426 [2024-11-17 14:37:05.411191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.426 [2024-11-17 14:37:05.411197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.426 [2024-11-17 14:37:05.411212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.426 qpair failed and we were unable to recover it.
00:27:16.426 [2024-11-17 14:37:05.421130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.426 [2024-11-17 14:37:05.421182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.426 [2024-11-17 14:37:05.421196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.426 [2024-11-17 14:37:05.421203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.426 [2024-11-17 14:37:05.421209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.426 [2024-11-17 14:37:05.421225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.426 qpair failed and we were unable to recover it.
00:27:16.426 [2024-11-17 14:37:05.431130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.426 [2024-11-17 14:37:05.431190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.426 [2024-11-17 14:37:05.431204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.426 [2024-11-17 14:37:05.431211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.426 [2024-11-17 14:37:05.431217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.431232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.441119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.441174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.441188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.441195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.441202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.441217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.451239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.451293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.451306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.451313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.451319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.451335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.461264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.461320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.461337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.461344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.461350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.461369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.471272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.471328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.471343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.471350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.471360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.471377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.481316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.481368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.481382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.481389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.481396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.481411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.491279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.491342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.491359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.491367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.491373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.491389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.501361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.501421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.501436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.501443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.501453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.501468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.511403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.511456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.511471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.511479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.511486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.511503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.521414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.521464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.521478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.521486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.521492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.521508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.531465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.531520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.531534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.531541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.531548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.531563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.541490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.541547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.541562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.541570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.541576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.541592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.551447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.551507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.551521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.551528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.551534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.427 [2024-11-17 14:37:05.551550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.427 qpair failed and we were unable to recover it.
00:27:16.427 [2024-11-17 14:37:05.561555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.427 [2024-11-17 14:37:05.561626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.427 [2024-11-17 14:37:05.561640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.427 [2024-11-17 14:37:05.561647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.427 [2024-11-17 14:37:05.561653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.428 [2024-11-17 14:37:05.561669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.428 qpair failed and we were unable to recover it.
00:27:16.428 [2024-11-17 14:37:05.571581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.428 [2024-11-17 14:37:05.571636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.428 [2024-11-17 14:37:05.571650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.428 [2024-11-17 14:37:05.571657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.428 [2024-11-17 14:37:05.571663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.428 [2024-11-17 14:37:05.571678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.428 qpair failed and we were unable to recover it.
00:27:16.428 [2024-11-17 14:37:05.581599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.428 [2024-11-17 14:37:05.581651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.428 [2024-11-17 14:37:05.581665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.428 [2024-11-17 14:37:05.581672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.428 [2024-11-17 14:37:05.581678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.428 [2024-11-17 14:37:05.581693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.428 qpair failed and we were unable to recover it.
00:27:16.428 [2024-11-17 14:37:05.591683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.428 [2024-11-17 14:37:05.591785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.428 [2024-11-17 14:37:05.591802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.428 [2024-11-17 14:37:05.591809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.428 [2024-11-17 14:37:05.591816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.428 [2024-11-17 14:37:05.591831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.428 qpair failed and we were unable to recover it.
00:27:16.428 [2024-11-17 14:37:05.601655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.428 [2024-11-17 14:37:05.601709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.428 [2024-11-17 14:37:05.601723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.428 [2024-11-17 14:37:05.601730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.428 [2024-11-17 14:37:05.601736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.428 [2024-11-17 14:37:05.601751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.428 qpair failed and we were unable to recover it.
00:27:16.428 [2024-11-17 14:37:05.611682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.428 [2024-11-17 14:37:05.611737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.428 [2024-11-17 14:37:05.611750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.428 [2024-11-17 14:37:05.611757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.428 [2024-11-17 14:37:05.611764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.428 [2024-11-17 14:37:05.611779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.428 qpair failed and we were unable to recover it.
00:27:16.428 [2024-11-17 14:37:05.621646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.428 [2024-11-17 14:37:05.621709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.428 [2024-11-17 14:37:05.621722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.428 [2024-11-17 14:37:05.621730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.428 [2024-11-17 14:37:05.621736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.428 [2024-11-17 14:37:05.621751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.428 qpair failed and we were unable to recover it.
00:27:16.428 [2024-11-17 14:37:05.631728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.428 [2024-11-17 14:37:05.631786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.428 [2024-11-17 14:37:05.631800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.428 [2024-11-17 14:37:05.631810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.428 [2024-11-17 14:37:05.631817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.428 [2024-11-17 14:37:05.631832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.428 qpair failed and we were unable to recover it.
00:27:16.428 [2024-11-17 14:37:05.641682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.428 [2024-11-17 14:37:05.641735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.428 [2024-11-17 14:37:05.641749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.428 [2024-11-17 14:37:05.641756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.428 [2024-11-17 14:37:05.641763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.428 [2024-11-17 14:37:05.641778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.428 qpair failed and we were unable to recover it.
00:27:16.689 [2024-11-17 14:37:05.651809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.689 [2024-11-17 14:37:05.651873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.689 [2024-11-17 14:37:05.651889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.689 [2024-11-17 14:37:05.651897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.689 [2024-11-17 14:37:05.651904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.689 [2024-11-17 14:37:05.651920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.689 qpair failed and we were unable to recover it.
00:27:16.689 [2024-11-17 14:37:05.661819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.689 [2024-11-17 14:37:05.661870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.689 [2024-11-17 14:37:05.661884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.689 [2024-11-17 14:37:05.661890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.689 [2024-11-17 14:37:05.661897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.689 [2024-11-17 14:37:05.661913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.689 qpair failed and we were unable to recover it.
00:27:16.689 [2024-11-17 14:37:05.671844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.689 [2024-11-17 14:37:05.671896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.689 [2024-11-17 14:37:05.671910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.689 [2024-11-17 14:37:05.671917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.689 [2024-11-17 14:37:05.671924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.689 [2024-11-17 14:37:05.671938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.689 qpair failed and we were unable to recover it.
00:27:16.689 [2024-11-17 14:37:05.681859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.689 [2024-11-17 14:37:05.681921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.689 [2024-11-17 14:37:05.681934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.689 [2024-11-17 14:37:05.681942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.689 [2024-11-17 14:37:05.681948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.689 [2024-11-17 14:37:05.681964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.689 qpair failed and we were unable to recover it.
00:27:16.689 [2024-11-17 14:37:05.691894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.689 [2024-11-17 14:37:05.691950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.689 [2024-11-17 14:37:05.691963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.689 [2024-11-17 14:37:05.691970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.689 [2024-11-17 14:37:05.691977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.691991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.701938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.701991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.702005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.702012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.702018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.702033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.711985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.712039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.712053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.712059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.712066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.712081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.721976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.722036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.722050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.722057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.722065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.722080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.732014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.732095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.732108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.732116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.732122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.732137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.742011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.742069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.742082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.742089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.742096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.742111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.752070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.752128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.752142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.752149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.752156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.752172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.762093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.762142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.762155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.762166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.762173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.762188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.772164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.772222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.772236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.772243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.772250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.772265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.782162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.782217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.782231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.782238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.782245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.782260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.792098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.792163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.792177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.792184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.792190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.792206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.802209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.802313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.802327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.802335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.802342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.802364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.812260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.812322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.812336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.812344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.812350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.812371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.690 qpair failed and we were unable to recover it.
00:27:16.690 [2024-11-17 14:37:05.822279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.690 [2024-11-17 14:37:05.822332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.690 [2024-11-17 14:37:05.822346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.690 [2024-11-17 14:37:05.822357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.690 [2024-11-17 14:37:05.822364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.690 [2024-11-17 14:37:05.822379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.691 qpair failed and we were unable to recover it.
00:27:16.691 [2024-11-17 14:37:05.832286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.691 [2024-11-17 14:37:05.832343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.691 [2024-11-17 14:37:05.832361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.691 [2024-11-17 14:37:05.832368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.691 [2024-11-17 14:37:05.832376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.691 [2024-11-17 14:37:05.832391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.691 qpair failed and we were unable to recover it.
00:27:16.691 [2024-11-17 14:37:05.842355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.691 [2024-11-17 14:37:05.842411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.691 [2024-11-17 14:37:05.842427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.691 [2024-11-17 14:37:05.842434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.691 [2024-11-17 14:37:05.842441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.691 [2024-11-17 14:37:05.842458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.691 qpair failed and we were unable to recover it.
00:27:16.691 [2024-11-17 14:37:05.852364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.691 [2024-11-17 14:37:05.852421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.691 [2024-11-17 14:37:05.852435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.691 [2024-11-17 14:37:05.852442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.691 [2024-11-17 14:37:05.852449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.691 [2024-11-17 14:37:05.852464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.691 qpair failed and we were unable to recover it.
00:27:16.691 [2024-11-17 14:37:05.862438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.691 [2024-11-17 14:37:05.862542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.691 [2024-11-17 14:37:05.862556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.691 [2024-11-17 14:37:05.862563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.691 [2024-11-17 14:37:05.862570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.691 [2024-11-17 14:37:05.862585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.691 qpair failed and we were unable to recover it.
00:27:16.691 [2024-11-17 14:37:05.872406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.691 [2024-11-17 14:37:05.872461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.691 [2024-11-17 14:37:05.872475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.691 [2024-11-17 14:37:05.872482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.691 [2024-11-17 14:37:05.872489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90
00:27:16.691 [2024-11-17 14:37:05.872504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.691 qpair failed and we were unable to recover it.
00:27:16.691 [2024-11-17 14:37:05.882431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.691 [2024-11-17 14:37:05.882485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.691 [2024-11-17 14:37:05.882498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.691 [2024-11-17 14:37:05.882505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.691 [2024-11-17 14:37:05.882512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.691 [2024-11-17 14:37:05.882527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-17 14:37:05.892471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.691 [2024-11-17 14:37:05.892528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.691 [2024-11-17 14:37:05.892545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.691 [2024-11-17 14:37:05.892553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.691 [2024-11-17 14:37:05.892559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.691 [2024-11-17 14:37:05.892574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-17 14:37:05.902536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.691 [2024-11-17 14:37:05.902590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.691 [2024-11-17 14:37:05.902604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.691 [2024-11-17 14:37:05.902611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.691 [2024-11-17 14:37:05.902617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.691 [2024-11-17 14:37:05.902632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.691 qpair failed and we were unable to recover it. 
00:27:16.952 [2024-11-17 14:37:05.912457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.952 [2024-11-17 14:37:05.912516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.952 [2024-11-17 14:37:05.912530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.952 [2024-11-17 14:37:05.912537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.952 [2024-11-17 14:37:05.912545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.952 [2024-11-17 14:37:05.912560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.952 qpair failed and we were unable to recover it. 00:27:16.952 [2024-11-17 14:37:05.922566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.952 [2024-11-17 14:37:05.922662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.952 [2024-11-17 14:37:05.922675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.952 [2024-11-17 14:37:05.922682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.952 [2024-11-17 14:37:05.922688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.952 [2024-11-17 14:37:05.922704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.952 qpair failed and we were unable to recover it. 00:27:16.952 [2024-11-17 14:37:05.932526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.952 [2024-11-17 14:37:05.932583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.952 [2024-11-17 14:37:05.932596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.952 [2024-11-17 14:37:05.932603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.952 [2024-11-17 14:37:05.932614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.952 [2024-11-17 14:37:05.932629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.952 qpair failed and we were unable to recover it. 
00:27:16.952 [2024-11-17 14:37:05.942670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.952 [2024-11-17 14:37:05.942725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.952 [2024-11-17 14:37:05.942739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.952 [2024-11-17 14:37:05.942746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.952 [2024-11-17 14:37:05.942753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.952 [2024-11-17 14:37:05.942769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.952 qpair failed and we were unable to recover it. 00:27:16.952 [2024-11-17 14:37:05.952648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.952 [2024-11-17 14:37:05.952705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.952 [2024-11-17 14:37:05.952721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.952 [2024-11-17 14:37:05.952728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.952 [2024-11-17 14:37:05.952735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.952 [2024-11-17 14:37:05.952750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.952 qpair failed and we were unable to recover it. 00:27:16.952 [2024-11-17 14:37:05.962671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.952 [2024-11-17 14:37:05.962728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.952 [2024-11-17 14:37:05.962741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.952 [2024-11-17 14:37:05.962749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.952 [2024-11-17 14:37:05.962755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.952 [2024-11-17 14:37:05.962771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.952 qpair failed and we were unable to recover it. 
00:27:16.952 [2024-11-17 14:37:05.972718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.952 [2024-11-17 14:37:05.972781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.952 [2024-11-17 14:37:05.972794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.952 [2024-11-17 14:37:05.972802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.952 [2024-11-17 14:37:05.972808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.952 [2024-11-17 14:37:05.972823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.952 qpair failed and we were unable to recover it. 00:27:16.952 [2024-11-17 14:37:05.982731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.952 [2024-11-17 14:37:05.982785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.952 [2024-11-17 14:37:05.982800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.952 [2024-11-17 14:37:05.982807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.952 [2024-11-17 14:37:05.982813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.952 [2024-11-17 14:37:05.982828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.952 qpair failed and we were unable to recover it. 00:27:16.952 [2024-11-17 14:37:05.992783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:05.992833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:05.992847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:05.992854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:05.992860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:05.992876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 
00:27:16.953 [2024-11-17 14:37:06.002782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.002835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.002849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.002856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.002863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.002878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 00:27:16.953 [2024-11-17 14:37:06.012835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.012895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.012910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.012917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.012925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.012940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 00:27:16.953 [2024-11-17 14:37:06.022845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.022896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.022913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.022920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.022927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.022942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 
00:27:16.953 [2024-11-17 14:37:06.032874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.032931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.032945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.032953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.032959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.032975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 00:27:16.953 [2024-11-17 14:37:06.042921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.042974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.042988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.042995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.043002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.043017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 00:27:16.953 [2024-11-17 14:37:06.052981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.053035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.053048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.053055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.053061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.053076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 
00:27:16.953 [2024-11-17 14:37:06.062954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.063006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.063020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.063027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.063042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.063057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 00:27:16.953 [2024-11-17 14:37:06.072987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.073036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.073049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.073056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.073062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.073078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 00:27:16.953 [2024-11-17 14:37:06.083012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.083071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.083084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.083091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.083098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.083113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 
00:27:16.953 [2024-11-17 14:37:06.093061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.093116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.093131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.093137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.093144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.093159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 00:27:16.953 [2024-11-17 14:37:06.103082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.103139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.103153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.103160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.103167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.103183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 00:27:16.953 [2024-11-17 14:37:06.113113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.953 [2024-11-17 14:37:06.113165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.953 [2024-11-17 14:37:06.113180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.953 [2024-11-17 14:37:06.113187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.953 [2024-11-17 14:37:06.113194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.953 [2024-11-17 14:37:06.113209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.953 qpair failed and we were unable to recover it. 
00:27:16.954 [2024-11-17 14:37:06.123175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.954 [2024-11-17 14:37:06.123233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.954 [2024-11-17 14:37:06.123247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.954 [2024-11-17 14:37:06.123254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.954 [2024-11-17 14:37:06.123261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.954 [2024-11-17 14:37:06.123276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.954 qpair failed and we were unable to recover it. 00:27:16.954 [2024-11-17 14:37:06.133162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.954 [2024-11-17 14:37:06.133217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.954 [2024-11-17 14:37:06.133231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.954 [2024-11-17 14:37:06.133238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.954 [2024-11-17 14:37:06.133244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.954 [2024-11-17 14:37:06.133260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.954 qpair failed and we were unable to recover it. 00:27:16.954 [2024-11-17 14:37:06.143206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.954 [2024-11-17 14:37:06.143268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.954 [2024-11-17 14:37:06.143282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.954 [2024-11-17 14:37:06.143290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.954 [2024-11-17 14:37:06.143297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.954 [2024-11-17 14:37:06.143313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.954 qpair failed and we were unable to recover it. 
00:27:16.954 [2024-11-17 14:37:06.153226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.954 [2024-11-17 14:37:06.153282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.954 [2024-11-17 14:37:06.153300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.954 [2024-11-17 14:37:06.153308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.954 [2024-11-17 14:37:06.153314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.954 [2024-11-17 14:37:06.153329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.954 qpair failed and we were unable to recover it. 00:27:16.954 [2024-11-17 14:37:06.163294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.954 [2024-11-17 14:37:06.163350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.954 [2024-11-17 14:37:06.163376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.954 [2024-11-17 14:37:06.163384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.954 [2024-11-17 14:37:06.163391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:16.954 [2024-11-17 14:37:06.163408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.954 qpair failed and we were unable to recover it. 00:27:17.215 [2024-11-17 14:37:06.173331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.215 [2024-11-17 14:37:06.173438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.215 [2024-11-17 14:37:06.173452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.215 [2024-11-17 14:37:06.173459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.215 [2024-11-17 14:37:06.173466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.215 [2024-11-17 14:37:06.173482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.215 qpair failed and we were unable to recover it. 
00:27:17.215 [2024-11-17 14:37:06.183336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.215 [2024-11-17 14:37:06.183404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.215 [2024-11-17 14:37:06.183419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.215 [2024-11-17 14:37:06.183426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.215 [2024-11-17 14:37:06.183433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.215 [2024-11-17 14:37:06.183448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.215 qpair failed and we were unable to recover it. 00:27:17.215 [2024-11-17 14:37:06.193335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.215 [2024-11-17 14:37:06.193390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.193404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.193415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.193421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.193437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 00:27:17.216 [2024-11-17 14:37:06.203357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.203410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.203423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.203430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.203437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.203452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 
00:27:17.216 [2024-11-17 14:37:06.213400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.213454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.213467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.213474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.213481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.213496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 00:27:17.216 [2024-11-17 14:37:06.223435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.223493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.223507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.223514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.223521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.223536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 00:27:17.216 [2024-11-17 14:37:06.233506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.233566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.233579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.233587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.233593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.233609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 
00:27:17.216 [2024-11-17 14:37:06.243527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.243581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.243595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.243602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.243608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.243623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 00:27:17.216 [2024-11-17 14:37:06.253531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.253589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.253603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.253610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.253617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.253632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 00:27:17.216 [2024-11-17 14:37:06.263598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.263664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.263678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.263685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.263692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.263707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 
00:27:17.216 [2024-11-17 14:37:06.273515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.273573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.273587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.273595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.273602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.273617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 00:27:17.216 [2024-11-17 14:37:06.283609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.283686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.283701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.283708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.283714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.283730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 00:27:17.216 [2024-11-17 14:37:06.293582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.293641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.293655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.293662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.293669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.293684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 
00:27:17.216 [2024-11-17 14:37:06.303677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.303734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.303748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.303755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.303761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.303777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 00:27:17.216 [2024-11-17 14:37:06.313687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.216 [2024-11-17 14:37:06.313742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.216 [2024-11-17 14:37:06.313756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.216 [2024-11-17 14:37:06.313763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.216 [2024-11-17 14:37:06.313770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.216 [2024-11-17 14:37:06.313785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.216 qpair failed and we were unable to recover it. 00:27:17.217 [2024-11-17 14:37:06.323696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.217 [2024-11-17 14:37:06.323759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.217 [2024-11-17 14:37:06.323773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.217 [2024-11-17 14:37:06.323784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.217 [2024-11-17 14:37:06.323791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.217 [2024-11-17 14:37:06.323806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.217 qpair failed and we were unable to recover it. 
00:27:17.217 [2024-11-17 14:37:06.333740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.217 [2024-11-17 14:37:06.333798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.217 [2024-11-17 14:37:06.333812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.217 [2024-11-17 14:37:06.333819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.217 [2024-11-17 14:37:06.333826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.217 [2024-11-17 14:37:06.333841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.217 qpair failed and we were unable to recover it. 00:27:17.217 [2024-11-17 14:37:06.343718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.217 [2024-11-17 14:37:06.343769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.217 [2024-11-17 14:37:06.343783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.217 [2024-11-17 14:37:06.343790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.217 [2024-11-17 14:37:06.343796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.217 [2024-11-17 14:37:06.343812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.217 qpair failed and we were unable to recover it. 00:27:17.217 [2024-11-17 14:37:06.353783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.217 [2024-11-17 14:37:06.353878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.217 [2024-11-17 14:37:06.353891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.217 [2024-11-17 14:37:06.353898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.217 [2024-11-17 14:37:06.353905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.217 [2024-11-17 14:37:06.353922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.217 qpair failed and we were unable to recover it. 
00:27:17.217 [2024-11-17 14:37:06.363772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.217 [2024-11-17 14:37:06.363828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.217 [2024-11-17 14:37:06.363842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.217 [2024-11-17 14:37:06.363849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.217 [2024-11-17 14:37:06.363856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.217 [2024-11-17 14:37:06.363875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.217 qpair failed and we were unable to recover it. 00:27:17.217 [2024-11-17 14:37:06.373873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.217 [2024-11-17 14:37:06.373929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.217 [2024-11-17 14:37:06.373942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.217 [2024-11-17 14:37:06.373949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.217 [2024-11-17 14:37:06.373956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.217 [2024-11-17 14:37:06.373971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.217 qpair failed and we were unable to recover it. 00:27:17.217 [2024-11-17 14:37:06.383887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.217 [2024-11-17 14:37:06.383938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.217 [2024-11-17 14:37:06.383951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.217 [2024-11-17 14:37:06.383958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.217 [2024-11-17 14:37:06.383964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.217 [2024-11-17 14:37:06.383980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.217 qpair failed and we were unable to recover it. 
00:27:17.217 [2024-11-17 14:37:06.393934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.217 [2024-11-17 14:37:06.393989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.217 [2024-11-17 14:37:06.394002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.217 [2024-11-17 14:37:06.394009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.217 [2024-11-17 14:37:06.394016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.217 [2024-11-17 14:37:06.394030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.217 qpair failed and we were unable to recover it. 00:27:17.217 [2024-11-17 14:37:06.403928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.217 [2024-11-17 14:37:06.403981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.217 [2024-11-17 14:37:06.403995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.217 [2024-11-17 14:37:06.404002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.217 [2024-11-17 14:37:06.404009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.217 [2024-11-17 14:37:06.404024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.217 qpair failed and we were unable to recover it. 00:27:17.217 [2024-11-17 14:37:06.413900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.217 [2024-11-17 14:37:06.413959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.217 [2024-11-17 14:37:06.413974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.217 [2024-11-17 14:37:06.413981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.217 [2024-11-17 14:37:06.413988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:17.217 [2024-11-17 14:37:06.414003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.217 qpair failed and we were unable to recover it. 
[... the same seven-entry CONNECT failure cycle repeats 66 more times at roughly 10 ms intervals, identical except for timestamps: target log times run from [2024-11-17 14:37:06.423990] through 2024-11-17 14:37:07.075, the elapsed-time prefix advances from 00:27:17.217 through 00:27:17.479 and 00:27:17.742 to 00:27:18.006, and every cycle ends "qpair failed and we were unable to recover it." ...]
00:27:18.006 [2024-11-17 14:37:07.085864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.006 [2024-11-17 14:37:07.085918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.006 [2024-11-17 14:37:07.085931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.006 [2024-11-17 14:37:07.085939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.006 [2024-11-17 14:37:07.085945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.006 [2024-11-17 14:37:07.085960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.006 qpair failed and we were unable to recover it. 00:27:18.006 [2024-11-17 14:37:07.095834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.006 [2024-11-17 14:37:07.095902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.006 [2024-11-17 14:37:07.095916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.006 [2024-11-17 14:37:07.095924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.006 [2024-11-17 14:37:07.095930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.006 [2024-11-17 14:37:07.095945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.006 qpair failed and we were unable to recover it. 00:27:18.006 [2024-11-17 14:37:07.105929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.006 [2024-11-17 14:37:07.105984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.006 [2024-11-17 14:37:07.105997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.006 [2024-11-17 14:37:07.106004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.006 [2024-11-17 14:37:07.106010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.006 [2024-11-17 14:37:07.106025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.006 qpair failed and we were unable to recover it. 
00:27:18.006 [2024-11-17 14:37:07.115954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.006 [2024-11-17 14:37:07.116011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.006 [2024-11-17 14:37:07.116025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.006 [2024-11-17 14:37:07.116032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.006 [2024-11-17 14:37:07.116038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.006 [2024-11-17 14:37:07.116053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.006 qpair failed and we were unable to recover it. 00:27:18.006 [2024-11-17 14:37:07.126043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.006 [2024-11-17 14:37:07.126143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.006 [2024-11-17 14:37:07.126157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.006 [2024-11-17 14:37:07.126163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.006 [2024-11-17 14:37:07.126170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.006 [2024-11-17 14:37:07.126186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.006 qpair failed and we were unable to recover it. 00:27:18.006 [2024-11-17 14:37:07.136012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.006 [2024-11-17 14:37:07.136065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.006 [2024-11-17 14:37:07.136078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.006 [2024-11-17 14:37:07.136086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.006 [2024-11-17 14:37:07.136092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.006 [2024-11-17 14:37:07.136107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.006 qpair failed and we were unable to recover it. 
00:27:18.006 [2024-11-17 14:37:07.146042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.006 [2024-11-17 14:37:07.146108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.006 [2024-11-17 14:37:07.146126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.006 [2024-11-17 14:37:07.146133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.006 [2024-11-17 14:37:07.146139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.006 [2024-11-17 14:37:07.146154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.006 qpair failed and we were unable to recover it. 00:27:18.006 [2024-11-17 14:37:07.155996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.006 [2024-11-17 14:37:07.156080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.006 [2024-11-17 14:37:07.156093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.006 [2024-11-17 14:37:07.156100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.006 [2024-11-17 14:37:07.156106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.006 [2024-11-17 14:37:07.156120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.006 qpair failed and we were unable to recover it. 00:27:18.006 [2024-11-17 14:37:07.166091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.007 [2024-11-17 14:37:07.166147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.007 [2024-11-17 14:37:07.166160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.007 [2024-11-17 14:37:07.166168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.007 [2024-11-17 14:37:07.166174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.007 [2024-11-17 14:37:07.166189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.007 qpair failed and we were unable to recover it. 
00:27:18.007 [2024-11-17 14:37:07.176059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.007 [2024-11-17 14:37:07.176114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.007 [2024-11-17 14:37:07.176128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.007 [2024-11-17 14:37:07.176135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.007 [2024-11-17 14:37:07.176141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.007 [2024-11-17 14:37:07.176156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.007 qpair failed and we were unable to recover it. 00:27:18.007 [2024-11-17 14:37:07.186145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.007 [2024-11-17 14:37:07.186222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.007 [2024-11-17 14:37:07.186236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.007 [2024-11-17 14:37:07.186243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.007 [2024-11-17 14:37:07.186253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.007 [2024-11-17 14:37:07.186268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.007 qpair failed and we were unable to recover it. 00:27:18.007 [2024-11-17 14:37:07.196277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.007 [2024-11-17 14:37:07.196387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.007 [2024-11-17 14:37:07.196401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.007 [2024-11-17 14:37:07.196408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.007 [2024-11-17 14:37:07.196414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.007 [2024-11-17 14:37:07.196430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.007 qpair failed and we were unable to recover it. 
00:27:18.007 [2024-11-17 14:37:07.206242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.007 [2024-11-17 14:37:07.206290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.007 [2024-11-17 14:37:07.206306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.007 [2024-11-17 14:37:07.206313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.007 [2024-11-17 14:37:07.206319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.007 [2024-11-17 14:37:07.206334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.007 qpair failed and we were unable to recover it. 00:27:18.007 [2024-11-17 14:37:07.216284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.007 [2024-11-17 14:37:07.216345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.007 [2024-11-17 14:37:07.216363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.007 [2024-11-17 14:37:07.216370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.007 [2024-11-17 14:37:07.216376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.007 [2024-11-17 14:37:07.216392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.007 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-17 14:37:07.226343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.356 [2024-11-17 14:37:07.226403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.356 [2024-11-17 14:37:07.226417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.356 [2024-11-17 14:37:07.226424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.356 [2024-11-17 14:37:07.226431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.356 [2024-11-17 14:37:07.226446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.356 qpair failed and we were unable to recover it. 
00:27:18.356 [2024-11-17 14:37:07.236317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.356 [2024-11-17 14:37:07.236387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.356 [2024-11-17 14:37:07.236402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.356 [2024-11-17 14:37:07.236410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.356 [2024-11-17 14:37:07.236416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.356 [2024-11-17 14:37:07.236432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-17 14:37:07.246300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.356 [2024-11-17 14:37:07.246371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.356 [2024-11-17 14:37:07.246387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.356 [2024-11-17 14:37:07.246394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.356 [2024-11-17 14:37:07.246400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.356 [2024-11-17 14:37:07.246416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-17 14:37:07.256375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.356 [2024-11-17 14:37:07.256434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.356 [2024-11-17 14:37:07.256448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.356 [2024-11-17 14:37:07.256455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.356 [2024-11-17 14:37:07.256462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.356 [2024-11-17 14:37:07.256478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.356 qpair failed and we were unable to recover it. 
00:27:18.356 [2024-11-17 14:37:07.266405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.356 [2024-11-17 14:37:07.266474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.356 [2024-11-17 14:37:07.266488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.356 [2024-11-17 14:37:07.266495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.356 [2024-11-17 14:37:07.266502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.356 [2024-11-17 14:37:07.266517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.356 qpair failed and we were unable to recover it. 00:27:18.356 [2024-11-17 14:37:07.276341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.356 [2024-11-17 14:37:07.276400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.356 [2024-11-17 14:37:07.276418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.276425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.276432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.276448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-17 14:37:07.286432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.286493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.286518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.286526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.286532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.286553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 
00:27:18.357 [2024-11-17 14:37:07.296481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.296535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.296549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.296556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.296563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.296579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-17 14:37:07.306485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.306551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.306564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.306572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.306578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.306594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-17 14:37:07.316532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.316582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.316596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.316606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.316612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.316628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 
00:27:18.357 [2024-11-17 14:37:07.326584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.326638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.326653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.326660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.326667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.326682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-17 14:37:07.336600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.336665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.336679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.336687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.336693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.336708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-17 14:37:07.346620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.346677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.346691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.346698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.346704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.346720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 
00:27:18.357 [2024-11-17 14:37:07.356575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.356640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.356654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.356661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.356668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.356686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-17 14:37:07.366659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.366726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.366740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.366747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.366753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.366768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-17 14:37:07.376705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.376764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.376778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.376785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.376792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.376808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 
00:27:18.357 [2024-11-17 14:37:07.386727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.386782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.386796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.386803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.386810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.386825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-17 14:37:07.396746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.396802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.396816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.396824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.396830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.396845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-17 14:37:07.406786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.406848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.406862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.406869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.406875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.406891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 
00:27:18.357 [2024-11-17 14:37:07.416812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.416919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.416933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.416940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.357 [2024-11-17 14:37:07.416947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.357 [2024-11-17 14:37:07.416962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.357 qpair failed and we were unable to recover it. 00:27:18.357 [2024-11-17 14:37:07.426840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.357 [2024-11-17 14:37:07.426897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.357 [2024-11-17 14:37:07.426911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.357 [2024-11-17 14:37:07.426919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.426926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.426941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-17 14:37:07.436799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.358 [2024-11-17 14:37:07.436859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.358 [2024-11-17 14:37:07.436874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.358 [2024-11-17 14:37:07.436881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.436888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.436903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 
00:27:18.358 [2024-11-17 14:37:07.446888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.358 [2024-11-17 14:37:07.446941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.358 [2024-11-17 14:37:07.446955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.358 [2024-11-17 14:37:07.446966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.446972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.446987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-17 14:37:07.456933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.358 [2024-11-17 14:37:07.456993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.358 [2024-11-17 14:37:07.457008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.358 [2024-11-17 14:37:07.457016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.457023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.457038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-17 14:37:07.466964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.358 [2024-11-17 14:37:07.467019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.358 [2024-11-17 14:37:07.467033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.358 [2024-11-17 14:37:07.467040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.467046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.467062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 
00:27:18.358 [2024-11-17 14:37:07.476980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.358 [2024-11-17 14:37:07.477034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.358 [2024-11-17 14:37:07.477048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.358 [2024-11-17 14:37:07.477055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.477061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.477076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-17 14:37:07.487003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.358 [2024-11-17 14:37:07.487062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.358 [2024-11-17 14:37:07.487077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.358 [2024-11-17 14:37:07.487086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.487093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.487114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-17 14:37:07.497057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.358 [2024-11-17 14:37:07.497124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.358 [2024-11-17 14:37:07.497139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.358 [2024-11-17 14:37:07.497146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.497152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.497166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 
00:27:18.358 [2024-11-17 14:37:07.507066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.358 [2024-11-17 14:37:07.507123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.358 [2024-11-17 14:37:07.507137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.358 [2024-11-17 14:37:07.507144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.507151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.507166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-17 14:37:07.517095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.358 [2024-11-17 14:37:07.517146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.358 [2024-11-17 14:37:07.517160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.358 [2024-11-17 14:37:07.517167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.517173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.517189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.358 [2024-11-17 14:37:07.527131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.358 [2024-11-17 14:37:07.527182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.358 [2024-11-17 14:37:07.527195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.358 [2024-11-17 14:37:07.527202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.527209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.527224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 
00:27:18.358 [2024-11-17 14:37:07.537170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.358 [2024-11-17 14:37:07.537229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.358 [2024-11-17 14:37:07.537243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.358 [2024-11-17 14:37:07.537250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.358 [2024-11-17 14:37:07.537257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.358 [2024-11-17 14:37:07.537273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.358 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-17 14:37:07.547187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.640 [2024-11-17 14:37:07.547239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.640 [2024-11-17 14:37:07.547252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.640 [2024-11-17 14:37:07.547260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.640 [2024-11-17 14:37:07.547267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.640 [2024-11-17 14:37:07.547282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-17 14:37:07.557209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.640 [2024-11-17 14:37:07.557266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.640 [2024-11-17 14:37:07.557280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.640 [2024-11-17 14:37:07.557287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.640 [2024-11-17 14:37:07.557293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.640 [2024-11-17 14:37:07.557309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.640 qpair failed and we were unable to recover it. 
00:27:18.640 [2024-11-17 14:37:07.567254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.640 [2024-11-17 14:37:07.567310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.640 [2024-11-17 14:37:07.567323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.640 [2024-11-17 14:37:07.567330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.640 [2024-11-17 14:37:07.567337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.640 [2024-11-17 14:37:07.567357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-17 14:37:07.577288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.640 [2024-11-17 14:37:07.577345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.640 [2024-11-17 14:37:07.577366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.640 [2024-11-17 14:37:07.577373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.640 [2024-11-17 14:37:07.577380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.640 [2024-11-17 14:37:07.577395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.640 qpair failed and we were unable to recover it. 00:27:18.640 [2024-11-17 14:37:07.587325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.640 [2024-11-17 14:37:07.587390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.640 [2024-11-17 14:37:07.587409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.640 [2024-11-17 14:37:07.587417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.640 [2024-11-17 14:37:07.587425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.587444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 
00:27:18.641 [2024-11-17 14:37:07.597275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.597329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.597344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.597355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.597363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.597379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-17 14:37:07.607275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.607342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.607361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.607369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.607375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.607391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-17 14:37:07.617388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.617446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.617460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.617467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.617476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.617493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 
00:27:18.641 [2024-11-17 14:37:07.627439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.627495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.627508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.627515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.627522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.627538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-17 14:37:07.637364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.637414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.637428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.637435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.637442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.637459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-17 14:37:07.647458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.647514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.647529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.647536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.647543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.647558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 
00:27:18.641 [2024-11-17 14:37:07.657512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.657612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.657626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.657633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.657640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.657656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-17 14:37:07.667444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.667500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.667514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.667522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.667529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.667544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-17 14:37:07.677562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.677619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.677633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.677640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.677647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.677662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 
00:27:18.641 [2024-11-17 14:37:07.687579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.687634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.687648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.687655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.687661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.687676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-17 14:37:07.697623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.697675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.697689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.697696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.697703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.697718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 00:27:18.641 [2024-11-17 14:37:07.707681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.707773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.707790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.707797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.707803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.641 [2024-11-17 14:37:07.707818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.641 qpair failed and we were unable to recover it. 
00:27:18.641 [2024-11-17 14:37:07.717646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.641 [2024-11-17 14:37:07.717704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.641 [2024-11-17 14:37:07.717718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.641 [2024-11-17 14:37:07.717725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.641 [2024-11-17 14:37:07.717732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.717747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-17 14:37:07.727628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.727719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.727734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.727742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.727749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.727767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-17 14:37:07.737728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.737785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.737800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.737808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.737817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.737833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 
00:27:18.642 [2024-11-17 14:37:07.747755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.747810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.747825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.747832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.747841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.747856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-17 14:37:07.757720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.757773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.757786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.757794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.757801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.757817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-17 14:37:07.767813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.767865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.767878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.767886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.767892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.767907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 
00:27:18.642 [2024-11-17 14:37:07.777822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.777879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.777893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.777899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.777906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.777921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-17 14:37:07.787851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.787905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.787919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.787926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.787932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.787948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-17 14:37:07.797896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.797949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.797963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.797970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.797976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.797992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 
00:27:18.642 [2024-11-17 14:37:07.807944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.807999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.808012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.808019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.808027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.808041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-17 14:37:07.817930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.817982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.817996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.818003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.818009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.818025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-17 14:37:07.827904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.827981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.827994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.828002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.828008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.828024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 
00:27:18.642 [2024-11-17 14:37:07.838002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.838092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.838108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.838115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.642 [2024-11-17 14:37:07.838121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.642 [2024-11-17 14:37:07.838136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.642 qpair failed and we were unable to recover it. 00:27:18.642 [2024-11-17 14:37:07.847960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.642 [2024-11-17 14:37:07.848026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.642 [2024-11-17 14:37:07.848040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.642 [2024-11-17 14:37:07.848047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.643 [2024-11-17 14:37:07.848054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.643 [2024-11-17 14:37:07.848069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.643 qpair failed and we were unable to recover it. 00:27:18.643 [2024-11-17 14:37:07.858067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.643 [2024-11-17 14:37:07.858121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.643 [2024-11-17 14:37:07.858134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.643 [2024-11-17 14:37:07.858141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.643 [2024-11-17 14:37:07.858148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.643 [2024-11-17 14:37:07.858162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.643 qpair failed and we were unable to recover it. 
00:27:18.904 [2024-11-17 14:37:07.868083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.904 [2024-11-17 14:37:07.868142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.904 [2024-11-17 14:37:07.868156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.904 [2024-11-17 14:37:07.868164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.904 [2024-11-17 14:37:07.868171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.904 [2024-11-17 14:37:07.868187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.904 qpair failed and we were unable to recover it. 00:27:18.904 [2024-11-17 14:37:07.878103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.904 [2024-11-17 14:37:07.878156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.904 [2024-11-17 14:37:07.878170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.904 [2024-11-17 14:37:07.878181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.904 [2024-11-17 14:37:07.878187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.904 [2024-11-17 14:37:07.878202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.904 qpair failed and we were unable to recover it. 00:27:18.904 [2024-11-17 14:37:07.888082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.904 [2024-11-17 14:37:07.888136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.904 [2024-11-17 14:37:07.888151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.904 [2024-11-17 14:37:07.888159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.904 [2024-11-17 14:37:07.888165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.904 [2024-11-17 14:37:07.888180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.904 qpair failed and we were unable to recover it. 
00:27:18.904 [2024-11-17 14:37:07.898167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.904 [2024-11-17 14:37:07.898221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.904 [2024-11-17 14:37:07.898235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.904 [2024-11-17 14:37:07.898242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.904 [2024-11-17 14:37:07.898249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.904 [2024-11-17 14:37:07.898264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.904 qpair failed and we were unable to recover it. 00:27:18.904 [2024-11-17 14:37:07.908144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.904 [2024-11-17 14:37:07.908198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.904 [2024-11-17 14:37:07.908212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.904 [2024-11-17 14:37:07.908218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.904 [2024-11-17 14:37:07.908225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.904 [2024-11-17 14:37:07.908240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.904 qpair failed and we were unable to recover it. 00:27:18.904 [2024-11-17 14:37:07.918210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.904 [2024-11-17 14:37:07.918281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.904 [2024-11-17 14:37:07.918296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.904 [2024-11-17 14:37:07.918303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.904 [2024-11-17 14:37:07.918309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.904 [2024-11-17 14:37:07.918328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.904 qpair failed and we were unable to recover it. 
00:27:18.904 [2024-11-17 14:37:07.928259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.904 [2024-11-17 14:37:07.928361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.904 [2024-11-17 14:37:07.928375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.904 [2024-11-17 14:37:07.928383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.904 [2024-11-17 14:37:07.928389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.904 [2024-11-17 14:37:07.928405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.904 qpair failed and we were unable to recover it. 00:27:18.904 [2024-11-17 14:37:07.938291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.904 [2024-11-17 14:37:07.938345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.904 [2024-11-17 14:37:07.938363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.904 [2024-11-17 14:37:07.938370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:07.938377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:07.938392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 00:27:18.905 [2024-11-17 14:37:07.948278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:07.948331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:07.948345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:07.948356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:07.948363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:07.948378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 
00:27:18.905 [2024-11-17 14:37:07.958285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:07.958339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:07.958356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:07.958364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:07.958370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:07.958385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 00:27:18.905 [2024-11-17 14:37:07.968378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:07.968431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:07.968445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:07.968452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:07.968459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:07.968474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 00:27:18.905 [2024-11-17 14:37:07.978410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:07.978483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:07.978497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:07.978504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:07.978510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:07.978525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 
00:27:18.905 [2024-11-17 14:37:07.988431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:07.988491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:07.988504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:07.988512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:07.988518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:07.988533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 00:27:18.905 [2024-11-17 14:37:07.998463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:07.998518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:07.998532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:07.998539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:07.998545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:07.998560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 00:27:18.905 [2024-11-17 14:37:08.008516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:08.008572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:08.008586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:08.008597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:08.008603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:08.008618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 
00:27:18.905 [2024-11-17 14:37:08.018485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:08.018540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:08.018555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:08.018561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:08.018569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:08.018585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 00:27:18.905 [2024-11-17 14:37:08.028561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:08.028649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:08.028662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:08.028669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:08.028675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:08.028691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 00:27:18.905 [2024-11-17 14:37:08.038562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:08.038617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:08.038631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:08.038638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:08.038645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:08.038660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 
00:27:18.905 [2024-11-17 14:37:08.048594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:08.048672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:08.048688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:08.048695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:08.048702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:08.048721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 00:27:18.905 [2024-11-17 14:37:08.058635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:08.058693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:08.058706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:08.058714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.905 [2024-11-17 14:37:08.058720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.905 [2024-11-17 14:37:08.058735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.905 qpair failed and we were unable to recover it. 00:27:18.905 [2024-11-17 14:37:08.068616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.905 [2024-11-17 14:37:08.068672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.905 [2024-11-17 14:37:08.068686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.905 [2024-11-17 14:37:08.068693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.906 [2024-11-17 14:37:08.068700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.906 [2024-11-17 14:37:08.068716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.906 qpair failed and we were unable to recover it. 
00:27:18.906 [2024-11-17 14:37:08.078653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.906 [2024-11-17 14:37:08.078732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.906 [2024-11-17 14:37:08.078746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.906 [2024-11-17 14:37:08.078753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.906 [2024-11-17 14:37:08.078759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.906 [2024-11-17 14:37:08.078775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.906 qpair failed and we were unable to recover it. 00:27:18.906 [2024-11-17 14:37:08.088701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.906 [2024-11-17 14:37:08.088753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.906 [2024-11-17 14:37:08.088767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.906 [2024-11-17 14:37:08.088774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.906 [2024-11-17 14:37:08.088781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.906 [2024-11-17 14:37:08.088796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.906 qpair failed and we were unable to recover it. 00:27:18.906 [2024-11-17 14:37:08.098692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.906 [2024-11-17 14:37:08.098747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.906 [2024-11-17 14:37:08.098761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.906 [2024-11-17 14:37:08.098769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.906 [2024-11-17 14:37:08.098775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.906 [2024-11-17 14:37:08.098791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.906 qpair failed and we were unable to recover it. 
00:27:18.906 [2024-11-17 14:37:08.108759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.906 [2024-11-17 14:37:08.108818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.906 [2024-11-17 14:37:08.108831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.906 [2024-11-17 14:37:08.108838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.906 [2024-11-17 14:37:08.108844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.906 [2024-11-17 14:37:08.108860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.906 qpair failed and we were unable to recover it. 00:27:18.906 [2024-11-17 14:37:08.118782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.906 [2024-11-17 14:37:08.118844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.906 [2024-11-17 14:37:08.118858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.906 [2024-11-17 14:37:08.118865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.906 [2024-11-17 14:37:08.118871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:18.906 [2024-11-17 14:37:08.118886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.906 qpair failed and we were unable to recover it. 00:27:19.166 [2024-11-17 14:37:08.128802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.166 [2024-11-17 14:37:08.128894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.166 [2024-11-17 14:37:08.128908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.166 [2024-11-17 14:37:08.128915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.166 [2024-11-17 14:37:08.128921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.166 [2024-11-17 14:37:08.128938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.166 qpair failed and we were unable to recover it. 
[last CONNECT failure sequence repeated 63 more times at ~10 ms intervals, 14:37:08.138 through 14:37:08.760; each repeat logs the same six *ERROR* records (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7f5198000b90; CQ transport error -6 on qpair id 1) and ends with "qpair failed and we were unable to recover it."]
00:27:19.692 [2024-11-17 14:37:08.770695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.692 [2024-11-17 14:37:08.770754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.692 [2024-11-17 14:37:08.770767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.692 [2024-11-17 14:37:08.770774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.692 [2024-11-17 14:37:08.770781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.692 [2024-11-17 14:37:08.770795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.692 qpair failed and we were unable to recover it. 00:27:19.692 [2024-11-17 14:37:08.780730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.692 [2024-11-17 14:37:08.780799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.692 [2024-11-17 14:37:08.780813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.692 [2024-11-17 14:37:08.780820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.692 [2024-11-17 14:37:08.780826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.692 [2024-11-17 14:37:08.780841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.692 qpair failed and we were unable to recover it. 00:27:19.692 [2024-11-17 14:37:08.790708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.692 [2024-11-17 14:37:08.790774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.692 [2024-11-17 14:37:08.790789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.692 [2024-11-17 14:37:08.790796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.692 [2024-11-17 14:37:08.790802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.692 [2024-11-17 14:37:08.790817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.692 qpair failed and we were unable to recover it. 
00:27:19.693 [2024-11-17 14:37:08.800789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.693 [2024-11-17 14:37:08.800867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.693 [2024-11-17 14:37:08.800881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.693 [2024-11-17 14:37:08.800888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.693 [2024-11-17 14:37:08.800895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.693 [2024-11-17 14:37:08.800910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.693 qpair failed and we were unable to recover it. 00:27:19.693 [2024-11-17 14:37:08.810807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.693 [2024-11-17 14:37:08.810859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.693 [2024-11-17 14:37:08.810873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.693 [2024-11-17 14:37:08.810880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.693 [2024-11-17 14:37:08.810887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.693 [2024-11-17 14:37:08.810902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.693 qpair failed and we were unable to recover it. 00:27:19.693 [2024-11-17 14:37:08.820872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.693 [2024-11-17 14:37:08.820931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.693 [2024-11-17 14:37:08.820945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.693 [2024-11-17 14:37:08.820952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.693 [2024-11-17 14:37:08.820958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.693 [2024-11-17 14:37:08.820974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.693 qpair failed and we were unable to recover it. 
00:27:19.693 [2024-11-17 14:37:08.830867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.693 [2024-11-17 14:37:08.830923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.693 [2024-11-17 14:37:08.830940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.693 [2024-11-17 14:37:08.830947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.693 [2024-11-17 14:37:08.830954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.693 [2024-11-17 14:37:08.830969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.693 qpair failed and we were unable to recover it. 00:27:19.693 [2024-11-17 14:37:08.840896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.693 [2024-11-17 14:37:08.840954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.693 [2024-11-17 14:37:08.840967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.693 [2024-11-17 14:37:08.840975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.693 [2024-11-17 14:37:08.840981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.693 [2024-11-17 14:37:08.840997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.693 qpair failed and we were unable to recover it. 00:27:19.693 [2024-11-17 14:37:08.850921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.693 [2024-11-17 14:37:08.850993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.693 [2024-11-17 14:37:08.851006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.693 [2024-11-17 14:37:08.851013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.693 [2024-11-17 14:37:08.851020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.693 [2024-11-17 14:37:08.851035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.693 qpair failed and we were unable to recover it. 
00:27:19.693 [2024-11-17 14:37:08.860981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.693 [2024-11-17 14:37:08.861041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.693 [2024-11-17 14:37:08.861056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.693 [2024-11-17 14:37:08.861063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.693 [2024-11-17 14:37:08.861070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.693 [2024-11-17 14:37:08.861086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.693 qpair failed and we were unable to recover it. 00:27:19.693 [2024-11-17 14:37:08.871019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.693 [2024-11-17 14:37:08.871075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.693 [2024-11-17 14:37:08.871089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.693 [2024-11-17 14:37:08.871096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.693 [2024-11-17 14:37:08.871106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.693 [2024-11-17 14:37:08.871122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.693 qpair failed and we were unable to recover it. 00:27:19.693 [2024-11-17 14:37:08.881050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.693 [2024-11-17 14:37:08.881122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.693 [2024-11-17 14:37:08.881136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.693 [2024-11-17 14:37:08.881144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.693 [2024-11-17 14:37:08.881150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.693 [2024-11-17 14:37:08.881165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.693 qpair failed and we were unable to recover it. 
00:27:19.693 [2024-11-17 14:37:08.891031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.693 [2024-11-17 14:37:08.891086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.693 [2024-11-17 14:37:08.891100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.693 [2024-11-17 14:37:08.891107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.693 [2024-11-17 14:37:08.891114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.693 [2024-11-17 14:37:08.891129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.693 qpair failed and we were unable to recover it. 00:27:19.693 [2024-11-17 14:37:08.901089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.693 [2024-11-17 14:37:08.901144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.693 [2024-11-17 14:37:08.901158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.693 [2024-11-17 14:37:08.901165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.693 [2024-11-17 14:37:08.901172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.693 [2024-11-17 14:37:08.901187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.693 qpair failed and we were unable to recover it. 00:27:19.956 [2024-11-17 14:37:08.911114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.956 [2024-11-17 14:37:08.911170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.956 [2024-11-17 14:37:08.911184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.956 [2024-11-17 14:37:08.911191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.956 [2024-11-17 14:37:08.911198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.956 [2024-11-17 14:37:08.911212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.956 qpair failed and we were unable to recover it. 
00:27:19.956 [2024-11-17 14:37:08.921141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.956 [2024-11-17 14:37:08.921199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.956 [2024-11-17 14:37:08.921213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.956 [2024-11-17 14:37:08.921221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.956 [2024-11-17 14:37:08.921227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.956 [2024-11-17 14:37:08.921242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-11-17 14:37:08.931181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.956 [2024-11-17 14:37:08.931245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.956 [2024-11-17 14:37:08.931259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.956 [2024-11-17 14:37:08.931267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.956 [2024-11-17 14:37:08.931273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.956 [2024-11-17 14:37:08.931289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-11-17 14:37:08.941198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.956 [2024-11-17 14:37:08.941267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.956 [2024-11-17 14:37:08.941281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.956 [2024-11-17 14:37:08.941289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.956 [2024-11-17 14:37:08.941295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.956 [2024-11-17 14:37:08.941310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.956 qpair failed and we were unable to recover it. 
00:27:19.956 [2024-11-17 14:37:08.951239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.956 [2024-11-17 14:37:08.951294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.956 [2024-11-17 14:37:08.951308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.956 [2024-11-17 14:37:08.951315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.956 [2024-11-17 14:37:08.951322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.956 [2024-11-17 14:37:08.951338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-11-17 14:37:08.961283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.956 [2024-11-17 14:37:08.961349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.956 [2024-11-17 14:37:08.961367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.956 [2024-11-17 14:37:08.961374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.956 [2024-11-17 14:37:08.961381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.956 [2024-11-17 14:37:08.961396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-11-17 14:37:08.971244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.956 [2024-11-17 14:37:08.971299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.956 [2024-11-17 14:37:08.971314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.956 [2024-11-17 14:37:08.971322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.956 [2024-11-17 14:37:08.971329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.956 [2024-11-17 14:37:08.971345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.956 qpair failed and we were unable to recover it. 
00:27:19.956 [2024-11-17 14:37:08.981301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.956 [2024-11-17 14:37:08.981360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.956 [2024-11-17 14:37:08.981374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.956 [2024-11-17 14:37:08.981382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.956 [2024-11-17 14:37:08.981389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.956 [2024-11-17 14:37:08.981404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-11-17 14:37:08.991312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.956 [2024-11-17 14:37:08.991371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.956 [2024-11-17 14:37:08.991385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.956 [2024-11-17 14:37:08.991393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.956 [2024-11-17 14:37:08.991399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.956 [2024-11-17 14:37:08.991414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.956 qpair failed and we were unable to recover it. 00:27:19.956 [2024-11-17 14:37:09.001371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.956 [2024-11-17 14:37:09.001437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.956 [2024-11-17 14:37:09.001451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.956 [2024-11-17 14:37:09.001462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.956 [2024-11-17 14:37:09.001468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.956 [2024-11-17 14:37:09.001484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.956 qpair failed and we were unable to recover it. 
00:27:19.956 [2024-11-17 14:37:09.011369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.956 [2024-11-17 14:37:09.011438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.956 [2024-11-17 14:37:09.011452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.956 [2024-11-17 14:37:09.011459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.011465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.011481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-11-17 14:37:09.021357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.021413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.021429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.021436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.021444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.021459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-11-17 14:37:09.031361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.031426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.031440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.031447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.031453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.031469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-11-17 14:37:09.041509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.041567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.041581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.041589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.041595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.041614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-11-17 14:37:09.051498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.051566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.051580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.051587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.051594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.051609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-11-17 14:37:09.061544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.061637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.061650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.061657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.061663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.061678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-11-17 14:37:09.071556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.071611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.071625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.071632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.071638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.071653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-11-17 14:37:09.081621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.081679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.081692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.081699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.081706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.081720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-11-17 14:37:09.091613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.091677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.091692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.091700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.091706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.091721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-11-17 14:37:09.101663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.101721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.101735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.101742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.101749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.101764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-11-17 14:37:09.111675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.111728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.111742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.111749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.111755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.111770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-11-17 14:37:09.121694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.121773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.121786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.121793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.121800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.121815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-11-17 14:37:09.131730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.131782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.131800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.131807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.131814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.131830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-11-17 14:37:09.141773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.141830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.141843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.141850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.141857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.141872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-11-17 14:37:09.151793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.151848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.151862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.151869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.151876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.151891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 
00:27:19.957 [2024-11-17 14:37:09.161835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.161906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.161920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.161927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.161933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.161948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:19.957 [2024-11-17 14:37:09.171882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.957 [2024-11-17 14:37:09.171939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.957 [2024-11-17 14:37:09.171953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.957 [2024-11-17 14:37:09.171960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.957 [2024-11-17 14:37:09.171968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:19.957 [2024-11-17 14:37:09.171987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.957 qpair failed and we were unable to recover it. 00:27:20.217 [2024-11-17 14:37:09.181856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.217 [2024-11-17 14:37:09.181941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.217 [2024-11-17 14:37:09.181955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.217 [2024-11-17 14:37:09.181962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.217 [2024-11-17 14:37:09.181969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.217 [2024-11-17 14:37:09.181984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.217 qpair failed and we were unable to recover it. 
00:27:20.217 [2024-11-17 14:37:09.191902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.217 [2024-11-17 14:37:09.191960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.217 [2024-11-17 14:37:09.191974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.217 [2024-11-17 14:37:09.191981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.217 [2024-11-17 14:37:09.191987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.217 [2024-11-17 14:37:09.192003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.217 qpair failed and we were unable to recover it. 00:27:20.217 [2024-11-17 14:37:09.201857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.217 [2024-11-17 14:37:09.201907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.217 [2024-11-17 14:37:09.201922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.217 [2024-11-17 14:37:09.201931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.217 [2024-11-17 14:37:09.201938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.217 [2024-11-17 14:37:09.201955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.217 qpair failed and we were unable to recover it. 00:27:20.217 [2024-11-17 14:37:09.211884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.217 [2024-11-17 14:37:09.211952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.217 [2024-11-17 14:37:09.211966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.217 [2024-11-17 14:37:09.211973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.217 [2024-11-17 14:37:09.211979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.217 [2024-11-17 14:37:09.211994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.217 qpair failed and we were unable to recover it. 
00:27:20.217 [2024-11-17 14:37:09.221923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.217 [2024-11-17 14:37:09.221976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.217 [2024-11-17 14:37:09.221990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.217 [2024-11-17 14:37:09.221997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.217 [2024-11-17 14:37:09.222002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.217 [2024-11-17 14:37:09.222019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.217 qpair failed and we were unable to recover it. 00:27:20.217 [2024-11-17 14:37:09.231963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.217 [2024-11-17 14:37:09.232060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.217 [2024-11-17 14:37:09.232074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.217 [2024-11-17 14:37:09.232082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.217 [2024-11-17 14:37:09.232088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.217 [2024-11-17 14:37:09.232104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.217 qpair failed and we were unable to recover it. 00:27:20.217 [2024-11-17 14:37:09.241982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.217 [2024-11-17 14:37:09.242079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.217 [2024-11-17 14:37:09.242094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.217 [2024-11-17 14:37:09.242101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.217 [2024-11-17 14:37:09.242107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.217 [2024-11-17 14:37:09.242122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.217 qpair failed and we were unable to recover it. 
00:27:20.217 [2024-11-17 14:37:09.252034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.217 [2024-11-17 14:37:09.252124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.217 [2024-11-17 14:37:09.252138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.217 [2024-11-17 14:37:09.252145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.217 [2024-11-17 14:37:09.252151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.217 [2024-11-17 14:37:09.252166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.217 qpair failed and we were unable to recover it. 00:27:20.217 [2024-11-17 14:37:09.262037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.262100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.262118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.262126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.262132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.262147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.218 [2024-11-17 14:37:09.272066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.272124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.272137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.272145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.272152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.272166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 
00:27:20.218 [2024-11-17 14:37:09.282102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.282156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.282169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.282176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.282182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.282197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.218 [2024-11-17 14:37:09.292226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.292320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.292334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.292341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.292347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.292368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.218 [2024-11-17 14:37:09.302163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.302225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.302239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.302246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.302256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.302271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 
00:27:20.218 [2024-11-17 14:37:09.312292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.312358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.312374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.312381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.312388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.312404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.218 [2024-11-17 14:37:09.322279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.322336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.322354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.322362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.322368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.322384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.218 [2024-11-17 14:37:09.332297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.332348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.332374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.332381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.332387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.332403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 
00:27:20.218 [2024-11-17 14:37:09.342344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.342418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.342432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.342440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.342447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.342462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.218 [2024-11-17 14:37:09.352330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.352440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.352455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.352462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.352469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.352485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.218 [2024-11-17 14:37:09.362418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.362495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.362510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.362517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.362523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.362538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 
00:27:20.218 [2024-11-17 14:37:09.372461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.372517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.372532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.372539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.372546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.372561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.218 [2024-11-17 14:37:09.382468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.382524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.382538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.382545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.382552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.382567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.218 [2024-11-17 14:37:09.392492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.392552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.392570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.392577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.392584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.392600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 
00:27:20.218 [2024-11-17 14:37:09.402444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.402500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.402514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.402522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.402528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.402545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.218 [2024-11-17 14:37:09.412566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.412621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.412634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.412641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.412648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.412663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.218 [2024-11-17 14:37:09.422581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.422641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.422655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.422661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.422668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.422683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 
00:27:20.218 [2024-11-17 14:37:09.432543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.218 [2024-11-17 14:37:09.432605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.218 [2024-11-17 14:37:09.432619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.218 [2024-11-17 14:37:09.432626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.218 [2024-11-17 14:37:09.432636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.218 [2024-11-17 14:37:09.432652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.218 qpair failed and we were unable to recover it. 00:27:20.477 [2024-11-17 14:37:09.442555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.442615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.442629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.442636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.442642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.442658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 00:27:20.478 [2024-11-17 14:37:09.452648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.452699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.452713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.452721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.452728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.452743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 
00:27:20.478 [2024-11-17 14:37:09.462682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.462738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.462752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.462759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.462766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.462781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 00:27:20.478 [2024-11-17 14:37:09.472674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.472769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.472783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.472791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.472797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.472812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 00:27:20.478 [2024-11-17 14:37:09.482710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.482766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.482780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.482787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.482793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.482808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 
00:27:20.478 [2024-11-17 14:37:09.492765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.492838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.492854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.492861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.492868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.492883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 00:27:20.478 [2024-11-17 14:37:09.502725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.502818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.502831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.502839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.502845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.502860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 00:27:20.478 [2024-11-17 14:37:09.512801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.512883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.512897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.512904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.512910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.512924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 
00:27:20.478 [2024-11-17 14:37:09.522803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.522858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.522872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.522879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.522885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.522900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 00:27:20.478 [2024-11-17 14:37:09.532825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.532881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.532895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.532902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.532909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.532924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 00:27:20.478 [2024-11-17 14:37:09.542967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.543039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.543053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.543060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.543066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.543081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 
00:27:20.478 [2024-11-17 14:37:09.552960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.553052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.553065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.553072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.553078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.553093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 00:27:20.478 [2024-11-17 14:37:09.562898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.478 [2024-11-17 14:37:09.562962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.478 [2024-11-17 14:37:09.562976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.478 [2024-11-17 14:37:09.562987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.478 [2024-11-17 14:37:09.562994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.478 [2024-11-17 14:37:09.563009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.478 qpair failed and we were unable to recover it. 00:27:20.478 [2024-11-17 14:37:09.573003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.573058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.573071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.573078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.573085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.573099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 
00:27:20.479 [2024-11-17 14:37:09.583050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.583103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.583117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.583123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.583130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.583145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 00:27:20.479 [2024-11-17 14:37:09.593105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.593163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.593177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.593184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.593191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.593206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 00:27:20.479 [2024-11-17 14:37:09.603100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.603156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.603170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.603176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.603183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.603202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 
00:27:20.479 [2024-11-17 14:37:09.613127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.613178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.613192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.613199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.613205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.613220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 00:27:20.479 [2024-11-17 14:37:09.623176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.623233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.623247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.623254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.623260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.623275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 00:27:20.479 [2024-11-17 14:37:09.633173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.633227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.633240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.633247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.633253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.633270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 
00:27:20.479 [2024-11-17 14:37:09.643210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.643276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.643291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.643298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.643304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.643320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 00:27:20.479 [2024-11-17 14:37:09.653235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.653289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.653304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.653311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.653317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.653333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 00:27:20.479 [2024-11-17 14:37:09.663319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.663426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.663440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.663447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.663453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.663470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 
00:27:20.479 [2024-11-17 14:37:09.673291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.673349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.673366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.673374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.673380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.673396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 00:27:20.479 [2024-11-17 14:37:09.683361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.683415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.683429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.683436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.683443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.683459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 00:27:20.479 [2024-11-17 14:37:09.693285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.479 [2024-11-17 14:37:09.693368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.479 [2024-11-17 14:37:09.693385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.479 [2024-11-17 14:37:09.693392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.479 [2024-11-17 14:37:09.693399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.479 [2024-11-17 14:37:09.693414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.479 qpair failed and we were unable to recover it. 
00:27:20.739 [2024-11-17 14:37:09.703390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.739 [2024-11-17 14:37:09.703446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.739 [2024-11-17 14:37:09.703460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.739 [2024-11-17 14:37:09.703467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.703473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.703490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 00:27:20.740 [2024-11-17 14:37:09.713440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.713497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.713511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.713518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.713524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.713539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 00:27:20.740 [2024-11-17 14:37:09.723419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.723472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.723486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.723493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.723500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.723516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 
00:27:20.740 [2024-11-17 14:37:09.733489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.733545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.733559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.733566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.733573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.733596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 00:27:20.740 [2024-11-17 14:37:09.743498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.743571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.743586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.743593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.743599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.743615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 00:27:20.740 [2024-11-17 14:37:09.753514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.753571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.753585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.753592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.753599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.753614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 
00:27:20.740 [2024-11-17 14:37:09.763465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.763530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.763543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.763551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.763557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.763572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 00:27:20.740 [2024-11-17 14:37:09.773574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.773630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.773644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.773651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.773658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.773672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 00:27:20.740 [2024-11-17 14:37:09.783537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.783594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.783608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.783615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.783623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.783638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 
00:27:20.740 [2024-11-17 14:37:09.793610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.793669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.793682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.793690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.793696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.793711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 00:27:20.740 [2024-11-17 14:37:09.803612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.803669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.803683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.803690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.803697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.803712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 00:27:20.740 [2024-11-17 14:37:09.813650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.813725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.813739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.813747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.813753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.813768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 
00:27:20.740 [2024-11-17 14:37:09.823770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.823828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.740 [2024-11-17 14:37:09.823844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.740 [2024-11-17 14:37:09.823851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.740 [2024-11-17 14:37:09.823858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.740 [2024-11-17 14:37:09.823873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.740 qpair failed and we were unable to recover it. 00:27:20.740 [2024-11-17 14:37:09.833711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.740 [2024-11-17 14:37:09.833769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.741 [2024-11-17 14:37:09.833783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.741 [2024-11-17 14:37:09.833791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.741 [2024-11-17 14:37:09.833797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.741 [2024-11-17 14:37:09.833813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.741 qpair failed and we were unable to recover it. 00:27:20.741 [2024-11-17 14:37:09.843769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.741 [2024-11-17 14:37:09.843825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.741 [2024-11-17 14:37:09.843839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.741 [2024-11-17 14:37:09.843846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.741 [2024-11-17 14:37:09.843853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:20.741 [2024-11-17 14:37:09.843869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.741 qpair failed and we were unable to recover it. 
[... the identical CONNECT-failure sequence ("Unknown controller ID 0x1" / sct 1, sc 130 / "Failed to poll NVMe-oF Fabric CONNECT command" / "CQ transport error -6 (No such device or address) on qpair id 1" / "qpair failed and we were unable to recover it.") repeats for every attempt timestamped 14:37:09.853 through 14:37:10.204; those duplicate entries are omitted here ...]
00:27:21.004 [2024-11-17 14:37:10.214835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.004 [2024-11-17 14:37:10.214885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.004 [2024-11-17 14:37:10.214899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.004 [2024-11-17 14:37:10.214906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.004 [2024-11-17 14:37:10.214913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:21.004 [2024-11-17 14:37:10.214929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.004 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-17 14:37:10.224897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.263 [2024-11-17 14:37:10.224968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.263 [2024-11-17 14:37:10.224982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.263 [2024-11-17 14:37:10.224990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.263 [2024-11-17 14:37:10.224997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5198000b90 00:27:21.263 [2024-11-17 14:37:10.225012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-17 14:37:10.234924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.263 [2024-11-17 14:37:10.235031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.263 [2024-11-17 14:37:10.235089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.263 [2024-11-17 14:37:10.235115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.263 [2024-11-17 14:37:10.235138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5190000b90 00:27:21.263 [2024-11-17 14:37:10.235191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.263 qpair failed and we were unable to recover it. 
00:27:21.263 [2024-11-17 14:37:10.244971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.263 [2024-11-17 14:37:10.245046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.263 [2024-11-17 14:37:10.245073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.263 [2024-11-17 14:37:10.245087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.263 [2024-11-17 14:37:10.245101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5190000b90 00:27:21.263 [2024-11-17 14:37:10.245133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.264 qpair failed and we were unable to recover it. 00:27:21.264 [2024-11-17 14:37:10.245251] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:21.264 A controller has encountered a failure and is being reset. 00:27:21.264 Controller properly reset. 00:27:21.264 Initializing NVMe Controllers 00:27:21.264 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:21.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:21.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:21.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:21.264 Initialization complete. Launching workers. 
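The tail of the trace shows the intended recovery: once the Keep Alive submission fails, the host resets the controller, re-attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and associates one qpair per lcore before relaunching the workers. As a rough sketch of how a target-side disconnect can be provoked by hand against a running SPDK target — an illustration, not the tc2 test script itself, and it assumes the subsystem and listener shown in the log plus an SPDK checkout at $SPDK_DIR — the TCP listener can be pulled out from under the host and then restored:

    #!/usr/bin/env bash
    # Hypothetical repro sketch: remove the TCP listener so host CONNECTs fail,
    # then restore it so the controller reset/reconnect path (seen above) runs.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    NQN=nqn.2016-06.io.spdk:cnode1

    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" \
        -t tcp -a 10.0.0.2 -s 4420
    sleep 10    # host retries CONNECT, then gives up and resets the controller
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" \
        -t tcp -a 10.0.0.2 -s 4420

Both listener calls are standard rpc.py RPCs; the timing and the exact error the host reports (connection refused versus the stale-controller rejection seen here) will depend on the initiator's reconnect settings and on whether target state survives the disconnect.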
00:27:21.264 Starting thread on core 1 00:27:21.264 Starting thread on core 2 00:27:21.264 Starting thread on core 3 00:27:21.264 Starting thread on core 0 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:21.264 00:27:21.264 real 0m10.844s 00:27:21.264 user 0m19.336s 00:27:21.264 sys 0m4.675s 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.264 ************************************ 00:27:21.264 END TEST nvmf_target_disconnect_tc2 00:27:21.264 ************************************ 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:21.264 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:21.264 rmmod nvme_tcp 00:27:21.264 rmmod nvme_fabrics 00:27:21.523 rmmod nvme_keyring 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1623171 ']' 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1623171 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1623171 ']' 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1623171 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1623171 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1623171' 00:27:21.523 killing process with pid 1623171 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 1623171 00:27:21.523 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1623171 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.783 14:37:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.690 14:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.690 00:27:23.690 real 0m19.671s 00:27:23.690 user 0m47.212s 00:27:23.690 sys 0m9.623s 00:27:23.690 14:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.691 14:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:23.691 ************************************ 00:27:23.691 END TEST nvmf_target_disconnect 00:27:23.691 ************************************ 00:27:23.691 14:37:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:23.691 00:27:23.691 real 5m52.805s 00:27:23.691 user 10m36.824s 00:27:23.691 sys 1m59.065s 00:27:23.691 14:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.691 14:37:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.691 ************************************ 00:27:23.691 END TEST nvmf_host 00:27:23.691 ************************************ 00:27:23.951 14:37:12 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:23.951 14:37:12 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:23.951 14:37:12 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:23.951 14:37:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:23.951 14:37:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.951 14:37:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.951 ************************************ 00:27:23.951 START TEST nvmf_target_core_interrupt_mode 00:27:23.951 ************************************ 00:27:23.951 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:23.951 * Looking for test storage... 00:27:23.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:23.951 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:23.951 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:23.951 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:23.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.952 --rc genhtml_branch_coverage=1 00:27:23.952 --rc genhtml_function_coverage=1 00:27:23.952 --rc genhtml_legend=1 00:27:23.952 --rc geninfo_all_blocks=1 00:27:23.952 --rc geninfo_unexecuted_blocks=1 00:27:23.952 00:27:23.952 ' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:23.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.952 --rc genhtml_branch_coverage=1 00:27:23.952 --rc genhtml_function_coverage=1 00:27:23.952 --rc genhtml_legend=1 00:27:23.952 --rc geninfo_all_blocks=1 00:27:23.952 --rc geninfo_unexecuted_blocks=1 00:27:23.952 00:27:23.952 ' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:23.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.952 --rc genhtml_branch_coverage=1 00:27:23.952 --rc genhtml_function_coverage=1 00:27:23.952 --rc genhtml_legend=1 00:27:23.952 --rc geninfo_all_blocks=1 00:27:23.952 --rc geninfo_unexecuted_blocks=1 00:27:23.952 00:27:23.952 ' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:23.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.952 --rc genhtml_branch_coverage=1 00:27:23.952 --rc genhtml_function_coverage=1 00:27:23.952 --rc genhtml_legend=1 00:27:23.952 --rc geninfo_all_blocks=1 00:27:23.952 --rc geninfo_unexecuted_blocks=1 00:27:23.952 00:27:23.952 ' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.952 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:24.213 ************************************ 00:27:24.213 START TEST nvmf_abort 00:27:24.213 ************************************ 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:24.213 * Looking for test storage... 00:27:24.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.213 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:24.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.214 --rc genhtml_branch_coverage=1 00:27:24.214 --rc genhtml_function_coverage=1 00:27:24.214 --rc genhtml_legend=1 00:27:24.214 --rc geninfo_all_blocks=1 00:27:24.214 --rc geninfo_unexecuted_blocks=1 00:27:24.214 00:27:24.214 ' 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:24.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.214 --rc genhtml_branch_coverage=1 00:27:24.214 --rc genhtml_function_coverage=1 00:27:24.214 --rc genhtml_legend=1 00:27:24.214 --rc geninfo_all_blocks=1 00:27:24.214 --rc geninfo_unexecuted_blocks=1 00:27:24.214 00:27:24.214 ' 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:24.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.214 --rc genhtml_branch_coverage=1 00:27:24.214 --rc genhtml_function_coverage=1 00:27:24.214 --rc genhtml_legend=1 00:27:24.214 --rc geninfo_all_blocks=1 00:27:24.214 --rc geninfo_unexecuted_blocks=1 00:27:24.214 00:27:24.214 ' 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:24.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.214 --rc genhtml_branch_coverage=1 00:27:24.214 --rc genhtml_function_coverage=1 00:27:24.214 --rc genhtml_legend=1 00:27:24.214 --rc geninfo_all_blocks=1 00:27:24.214 --rc geninfo_unexecuted_blocks=1 00:27:24.214 00:27:24.214 ' 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.214 14:37:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:24.214 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:30.788 14:37:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.788 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:30.789 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
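
The trace above shows gather_supported_nvmf_pci_devs building per-family device-ID lists (e810, x722, mlx) and matching them against Intel (0x8086) and Mellanox (0x15b3) entries in a prebuilt pci_bus_cache; both ports of an E810 NIC (device 0x159b, ice driver) are found. A minimal sketch of the equivalent lookup, assuming lspci from pciutils is available on the node (the script itself reads its cache and never shells out to lspci):

    # Hypothetical stand-in for the pci_bus_cache match seen above.
    intel=8086
    for dev in 1592 159b; do                 # the two E810 IDs probed first
        lspci -Dnn -d "${intel}:${dev}"      # one line per matching PCI function
    done
    # Expected on this node: 0000:86:00.0 and 0000:86:00.1, both 8086:159b (ice).
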
00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:30.789 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:30.789 Found net devices under 0000:86:00.0: cvl_0_0 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:30.789 Found net devices under 0000:86:00.1: cvl_0_1 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:30.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:27:30.789 00:27:30.789 --- 10.0.0.2 ping statistics --- 00:27:30.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.789 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:30.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:27:30.789 00:27:30.789 --- 10.0.0.1 ping statistics --- 00:27:30.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.789 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1627713 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1627713 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1627713 ']' 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.789 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.790 [2024-11-17 14:37:19.372286] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:30.790 [2024-11-17 14:37:19.373309] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:27:30.790 [2024-11-17 14:37:19.373350] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.790 [2024-11-17 14:37:19.454874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:30.790 [2024-11-17 14:37:19.497144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.790 [2024-11-17 14:37:19.497181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.790 [2024-11-17 14:37:19.497189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.790 [2024-11-17 14:37:19.497196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.790 [2024-11-17 14:37:19.497201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.790 [2024-11-17 14:37:19.498587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.790 [2024-11-17 14:37:19.498711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.790 [2024-11-17 14:37:19.498711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.790 [2024-11-17 14:37:19.565981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:30.790 [2024-11-17 14:37:19.566767] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:30.790 [2024-11-17 14:37:19.567012] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
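
The nvmf_tcp_init block above turns the two-port NIC into a point-to-point test link: port cvl_0_0 is moved into a fresh network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so the pings cross the actual wire between the two ports. Condensed from the commands traced above (interface names and addresses are the ones this run used; the script also tags the iptables rule with an SPDK_NVMF comment so teardown can strip it later):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # nvmfappstart then launches the target inside the namespace; -m 0xE keeps
    # core 0 free, which is why three reactors come up on cores 1-3:
    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE

--interrupt-mode is the feature this whole test run exercises; the thread.c "to intr mode" notices around this point confirm the app thread and each nvmf poll group were switched over.
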
00:27:30.790 [2024-11-17 14:37:19.567161] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.790 [2024-11-17 14:37:19.639437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.790 Malloc0 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.790 Delay0 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
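
abort.sh then provisions the target over the RPC socket. The rpc_cmd calls traced here (and the listener calls just below) correspond to the following scripts/rpc.py invocations, a sketch only, assuming the default /var/tmp/spdk.sock:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0       # 64 MiB, 4 KiB blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000            # latencies in usec, ~1 s
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    # The tcp listener on 10.0.0.2:4420 is added next in the trace.

The Delay0 wrapper is what makes the abort test meaningful: with roughly a second of injected latency per I/O, requests are still queued at the target when the initiator's abort commands arrive.
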
00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.790 [2024-11-17 14:37:19.735347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.790 14:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:30.790 [2024-11-17 14:37:19.905514] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:33.323 Initializing NVMe Controllers 00:27:33.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:33.323 controller IO queue size 128 less than required 00:27:33.323 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:33.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:33.323 Initialization complete. Launching workers. 
00:27:33.323 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36910 00:27:33.323 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36967, failed to submit 66 00:27:33.323 success 36910, unsuccessful 57, failed 0 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:33.323 rmmod nvme_tcp 00:27:33.323 rmmod nvme_fabrics 00:27:33.323 rmmod nvme_keyring 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1627713 ']' 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1627713 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1627713 ']' 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1627713 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1627713 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1627713' 00:27:33.323 killing process with pid 1627713 
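
The abort run's counters above are internally consistent and worth decoding: 123 I/Os completed normally and 36910 "failed", which by all appearances means they were successfully aborted, out of 37033 total; the initiator attempted one abort per I/O:

    36910 success + 57 unsuccessful       = 36967 aborts submitted
    36967 submitted + 66 failed to submit = 37033 = 123 + 36910 total I/Os
    57 + 66 = 123                         -> each normally completed I/O is one
                                             whose abort presumably lost the race
                                             or was never submitted
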
00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1627713 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1627713 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:33.323 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:33.324 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:33.324 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:33.324 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:33.324 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:33.324 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:33.324 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.324 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.324 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.231 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:35.231 00:27:35.231 real 0m11.253s 00:27:35.231 user 0m10.816s 00:27:35.231 sys 0m5.724s 00:27:35.231 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.231 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.231 ************************************ 00:27:35.231 END TEST nvmf_abort 00:27:35.231 ************************************ 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:35.491 ************************************ 00:27:35.491 START TEST nvmf_ns_hotplug_stress 00:27:35.491 ************************************ 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:35.491 * Looking for test storage... 
00:27:35.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.491 --rc genhtml_branch_coverage=1 00:27:35.491 --rc genhtml_function_coverage=1 00:27:35.491 --rc genhtml_legend=1 00:27:35.491 --rc geninfo_all_blocks=1 00:27:35.491 --rc geninfo_unexecuted_blocks=1 00:27:35.491 00:27:35.491 ' 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.491 --rc genhtml_branch_coverage=1 00:27:35.491 --rc genhtml_function_coverage=1 00:27:35.491 --rc genhtml_legend=1 00:27:35.491 --rc geninfo_all_blocks=1 00:27:35.491 --rc geninfo_unexecuted_blocks=1 00:27:35.491 00:27:35.491 ' 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.491 --rc genhtml_branch_coverage=1 00:27:35.491 --rc genhtml_function_coverage=1 00:27:35.491 --rc genhtml_legend=1 00:27:35.491 --rc geninfo_all_blocks=1 00:27:35.491 --rc geninfo_unexecuted_blocks=1 00:27:35.491 00:27:35.491 ' 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.491 --rc genhtml_branch_coverage=1 00:27:35.491 --rc genhtml_function_coverage=1 
00:27:35.491 --rc genhtml_legend=1 00:27:35.491 --rc geninfo_all_blocks=1 00:27:35.491 --rc geninfo_unexecuted_blocks=1 00:27:35.491 00:27:35.491 ' 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.491 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.751 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:35.751 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:35.751 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.751 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.751 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.751 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.751 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:35.752 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:42.381 14:37:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:42.381 14:37:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:42.381 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:42.381 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:42.381 
14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:42.381 Found net devices under 0000:86:00.0: cvl_0_0 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:42.381 Found net devices under 0000:86:00.1: cvl_0_1 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:42.381 14:37:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:42.381 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:42.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:27:42.382 00:27:42.382 --- 10.0.0.2 ping statistics --- 00:27:42.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.382 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:42.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:27:42.382 00:27:42.382 --- 10.0.0.1 ping statistics --- 00:27:42.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.382 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1631711 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1631711 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1631711 ']' 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
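The block above is the complete point-to-point test topology nvmf_tcp_init builds before any NVMe-oF traffic flows, condensed here into the underlying commands (device, namespace, and address names taken verbatim from the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port isolated in its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                         # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target ns -> root ns

Both pings completing with 0% loss is what lets nvmf/common.sh reach its return 0 and the test proceed.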
00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:42.382 [2024-11-17 14:37:30.737973] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:42.382 [2024-11-17 14:37:30.738900] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:27:42.382 [2024-11-17 14:37:30.738932] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.382 [2024-11-17 14:37:30.817278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:42.382 [2024-11-17 14:37:30.859454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.382 [2024-11-17 14:37:30.859491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.382 [2024-11-17 14:37:30.859498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.382 [2024-11-17 14:37:30.859504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.382 [2024-11-17 14:37:30.859509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:42.382 [2024-11-17 14:37:30.860838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.382 [2024-11-17 14:37:30.860947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.382 [2024-11-17 14:37:30.860948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.382 [2024-11-17 14:37:30.927085] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:42.382 [2024-11-17 14:37:30.927874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:42.382 [2024-11-17 14:37:30.928147] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:42.382 [2024-11-17 14:37:30.928292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
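With the namespace plumbed, nvmfappstart launches the target inside it in interrupt mode; the notices above confirm three reactors (cores 1-3, matching mask 0xE) and each spdk_thread switching to intr mode. Reduced to its essentials (paths as traced; the readiness poll is only a sketch of what waitforlisten does, using the real rpc_get_methods RPC):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # block until the app answers on /var/tmp/spdk.sock
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done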
00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:42.382 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:42.382 [2024-11-17 14:37:31.161743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.382 14:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:42.382 14:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.382 [2024-11-17 14:37:31.554166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.382 14:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:42.642 14:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:42.901 Malloc0 00:27:42.901 14:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:43.160 Delay0 00:27:43.160 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.160 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:43.420 NULL1 00:27:43.420 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
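At this point everything the test needs is configured over RPC: a TCP transport, subsystem cnode1 listening on 10.0.0.2:4420, and namespaces backed by a delay bdev (Delay0 layered on Malloc0) and a resizable null bdev (NULL1). The remainder of this log is one pattern repeated while spdk_nvme_perf (randread, queue depth 128, 30 s) hammers the subsystem. A condensed sketch of that loop (RPC names and arguments as traced; the loop structure paraphrases target/ns_hotplug_stress.sh):

  rpc=scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                      # perf still running?
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove nsid 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back
      $rpc bdev_null_resize NULL1 $((++null_size))                 # grow NULL1: 1001, 1002, ...
  done

Each iteration below (null_size 1001, 1002, ... 1034 and counting) is one pass through this loop, so the repetition is the test working as intended, not log noise.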
00:27:43.678 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1632122 00:27:43.678 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:43.678 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:43.678 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.938 14:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.197 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:44.197 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:44.197 true 00:27:44.456 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:44.456 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.456 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.715 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:44.715 14:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:44.975 true 00:27:44.975 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:44.975 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.233 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.492 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:45.492 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:45.492 true 00:27:45.751 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:45.751 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.751 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.010 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:46.010 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:46.269 true 00:27:46.269 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:46.269 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.528 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.788 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:46.788 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:46.788 true 00:27:47.048 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:47.048 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.048 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.308 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:47.308 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:47.567 true 00:27:47.567 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:47.567 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.826 14:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.084 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:48.085 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:48.343 true 00:27:48.344 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:48.344 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.344 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.602 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:48.602 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:48.861 true 00:27:48.861 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:48.861 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.120 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.379 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:49.379 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:49.638 true 00:27:49.638 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:49.638 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.638 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.897 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:49.897 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:50.156 true 00:27:50.156 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1632122 00:27:50.156 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.415 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.674 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:50.674 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:50.933 true 00:27:50.933 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:50.933 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.933 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.192 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:51.192 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:51.450 true 00:27:51.450 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:51.450 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.709 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.967 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:51.967 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:52.226 true 00:27:52.226 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:52.226 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.226 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.485 14:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:52.485 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:52.743 true 00:27:52.743 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:52.743 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.002 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.261 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:53.261 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:53.521 true 00:27:53.521 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:53.521 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.521 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.781 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:53.781 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:54.040 true 00:27:54.040 14:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:54.040 14:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.299 14:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.557 14:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:54.557 14:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:54.816 true 00:27:54.816 14:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:54.816 14:37:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.075 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.075 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:55.075 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:55.334 true 00:27:55.334 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:55.334 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.593 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.852 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:55.852 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:56.111 true 00:27:56.111 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:56.111 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.370 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.370 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:56.370 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:56.629 true 00:27:56.629 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:56.629 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.887 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.146 14:37:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:57.146 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:57.405 true 00:27:57.405 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:57.405 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.664 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.664 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:57.664 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:57.922 true 00:27:57.923 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:57.923 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.181 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:58.440 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:58.699 true 00:27:58.699 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:58.699 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.957 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.216 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:59.216 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:59.216 true 00:27:59.216 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:59.216 14:37:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.475 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.734 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:59.734 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:59.993 true 00:27:59.993 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:27:59.993 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.251 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.510 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:00.510 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:00.510 true 00:28:00.510 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:00.510 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.770 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.029 14:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:01.029 14:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:01.287 true 00:28:01.287 14:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:01.288 14:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.547 14:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.806 14:37:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:01.806 14:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:01.806 true 00:28:01.806 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:01.806 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.065 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.324 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:28:02.324 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:28:02.583 true 00:28:02.583 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:02.583 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.842 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.102 14:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:28:03.102 14:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:28:03.102 true 00:28:03.102 14:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:03.102 14:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.361 14:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.620 14:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:28:03.620 14:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:28:03.879 true 00:28:03.879 14:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:03.879 14:37:52 
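
The @44-@50 markers repeating above and below trace one iteration of the hotplug loop: check that the I/O generator (PID 1632122) is still alive, hot-remove namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attach the Delay0 bdev as a namespace, then grow the NULL1 null bdev by one ("true" is the RPC's reply). A minimal bash sketch of what those markers imply — a reconstruction from the trace, not the actual ns_hotplug_stress.sh, and PERF_PID is an assumed variable name:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1020                                                       # starting value assumed; this excerpt begins at 1021
    while kill -0 "$PERF_PID"; do                                        # @44: loop while the I/O workload still runs
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-unplug NSID 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach the Delay0 bdev as a namespace
        null_size=$((null_size + 1))                                     # @49: 1021, 1022, ... one step per iteration
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # @50: resize NULL1 under live I/O
    done

The point of the pattern is to exercise the namespace attach/detach and bdev resize paths while NVMe/TCP traffic is in flight against the same subsystem.
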
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.138 14:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.397 14:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:28:04.397 14:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:28:04.397 true 00:28:04.397 14:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:04.398 14:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.656 14:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.915 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:28:04.915 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:28:05.174 true 00:28:05.174 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:05.174 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.432 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.691 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:28:05.691 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:28:05.691 true 00:28:05.692 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:05.692 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.950 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.209 14:37:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:28:06.209 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:28:06.468 true 00:28:06.468 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:06.468 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.727 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.986 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:28:06.986 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:28:06.986 true 00:28:06.986 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:06.986 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.245 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.504 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:28:07.504 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:28:07.816 true 00:28:07.816 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:07.816 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.114 14:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.114 14:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:28:08.114 14:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:28:08.389 true 00:28:08.389 14:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:08.389 14:37:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.648 14:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.907 14:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:28:08.907 14:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:28:08.907 true 00:28:08.907 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:08.907 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.165 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.424 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:28:09.424 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:28:09.683 true 00:28:09.683 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:09.683 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.942 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.213 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:28:10.213 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:28:10.213 true 00:28:10.213 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:10.213 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.477 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.735 14:37:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:28:10.736 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:28:10.994 true 00:28:10.994 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:10.994 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.252 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.512 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:28:11.512 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:28:11.512 true 00:28:11.512 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:11.512 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.771 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.031 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:28:12.031 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:28:12.290 true 00:28:12.290 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:12.290 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.549 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.807 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:28:12.807 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:28:12.807 true 00:28:12.807 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122 00:28:12.807 14:38:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:13.067 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:13.326 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:28:13.326 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:28:13.585 true
00:28:13.585 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122
00:28:13.585 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:13.844 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:14.103 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:28:14.103 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:28:14.103 Initializing NVMe Controllers
00:28:14.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:14.103 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:28:14.103 Controller IO queue size 128, less than required.
00:28:14.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:14.103 WARNING: Some requested NVMe devices were skipped
00:28:14.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:14.103 Initialization complete. Launching workers.
00:28:14.103 ========================================================
00:28:14.103 Latency(us)
00:28:14.103 Device Information : IOPS MiB/s Average min max
00:28:14.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27137.35 13.25 4716.63 1199.97 44134.55
00:28:14.103 ========================================================
00:28:14.103 Total : 27137.35 13.25 4716.63 1199.97 44134.55
00:28:14.103
00:28:14.103 true
00:28:14.103 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1632122
00:28:14.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1632122) - No such process
00:28:14.103 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1632122
00:28:14.103 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:14.362 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:14.621 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:14.621 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:28:14.621 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:28:14.621 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:14.621 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:28:14.621 null0
00:28:14.621 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:14.621 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:14.881 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:28:14.881 null1
00:28:14.881 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:14.881 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:14.881 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:28:15.140 null2
00:28:15.140 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:15.140 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:15.140 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:15.399 null3 00:28:15.399 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.399 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.399 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:15.399 null4 00:28:15.659 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.659 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.659 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:15.659 null5 00:28:15.659 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.659 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.659 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:15.918 null6 00:28:15.918 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.918 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.918 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:16.178 null7 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
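
Once the perf workload (PID 1632122) exits, the kill -0 probe at line 44 fails with "No such process" — kill -0 sends no signal, it only tests whether the PID still exists — so the resize loop ends and wait reaps the workload's status. The summary table above reports 27137.35 IOPS moving 13.25 MiB/s; since 27137.35 x 512 B / 2^20 = 13.25 MiB/s, the workload appears to use 512-byte IOs (an inference; the IO size itself is not printed in this excerpt). Average completion latency is 4716.63 us with a 1199.97-44134.55 us spread, consistent with the earlier warning that the controller IO queue size (128) was less than required, so requests may queue at the NVMe driver.
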
00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:16.178 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
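
With the single-threaded phase done, the script removes the remaining namespaces (@54-@55) and fans out: eight 100 MiB null bdevs with 4096-byte blocks are created (null0 through null7, @58-@60), then one add_remove worker per bdev is launched in the background and its PID recorded (@62-@64) so the script can wait on all eight (@66 below, waiting on PIDs 1637383 through 1637395). A hypothetical sketch of that fan-out, reconstructed from the trace markers rather than copied from the script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                        # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do              # @59
        $rpc_py bdev_null_create "null$i" 100 4096    # @60: name, size (MiB), block size, per the trace arguments
    done
    for ((i = 0; i < nthreads; i++)); do              # @62
        add_remove $((i + 1)) "null$i" &              # @63: worker i gets NSID i+1 backed by null<i>
        pids+=($!)                                    # @64: collect the background worker's PID
    done
    wait "${pids[@]}"                                 # @66: block until every worker completes

Backgrounding the workers is what makes their RPCs interleave in the trace below; the stress is precisely eight concurrent namespace add/remove streams against one subsystem.
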
00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
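
Each add_remove worker then traces its own @14-@18 lines, interleaved with the other seven (local nsid=1 bdev=null0 through nsid=8 bdev=null7 above). A plausible reconstruction of the function from those markers — the ten-round count is read off the "(( i < 10 ))" trace, the rest is inferred:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                                                            # @14
        for ((i = 0; i < 10; i++)); do                                                   # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17: attach bdev as NSID
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18: detach it again
        done
    }

Because every worker owns a distinct NSID, the eight loops churn the same subsystem without colliding on namespace IDs, which is why, further down, the remove_ns calls complete in arbitrary order (1, 4, 5, 7, 6, 8, 3, 2 in one round). To watch the churn from outside, the live namespace set could be sampled with the nvmf_get_subsystems RPC; this spot-check is illustrative only, not something the test runs:

    # Print the NSIDs currently attached to cnode1; repeated calls show the set changing.
    $rpc_py nvmf_get_subsystems | python3 -c '
    import json, sys
    for sub in json.load(sys.stdin):
        if sub["nqn"] == "nqn.2016-06.io.spdk:cnode1":
            print(sorted(ns["nsid"] for ns in sub.get("namespaces", [])))'
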
00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1637383 1637385 1637386 1637388 1637390 1637392 1637394 1637395 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.179 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:16.438 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.439 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:16.698 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.699 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:16.699 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:16.699 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:16.699 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:16.699 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:16.699 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:16.699 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.958 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.959 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:16.959 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.959 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.959 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:16.959 14:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.959 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.959 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:16.959 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.959 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.959 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:17.218 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.218 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:17.218 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:17.218 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:17.218 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:17.218 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:17.218 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:17.218 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:17.218 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.218 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.218 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:17.478 14:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:17.478 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:17.737 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.738 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.738 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:17.738 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.738 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.738 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:17.738 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.738 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.738 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:17.738 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.738 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.738 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:17.997 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.997 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:17.997 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:17.997 14:38:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:17.997 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:17.997 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:17.997 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:17.997 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.256 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.257 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:18.257 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.257 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.257 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:18.257 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.257 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:18.515 
14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.515 14:38:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.515 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:18.774 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:18.774 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.774 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:18.774 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.774 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:18.774 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:18.774 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.774 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.033 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.033 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.034 14:38:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.034 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.293 14:38:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.293 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.293 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.293 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.293 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.293 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.294 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.294 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.294 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.294 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.294 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.553 14:38:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
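The add/remove churn traced above is the core of the hotplug stress: lines 16-18 of target/ns_hotplug_stress.sh repeatedly attach and detach each null bdev as a namespace of nqn.2016-06.io.spdk:cnode1, and the shuffled nsid ordering in the log comes from the workers running concurrently. A minimal sketch of that shape, reconstructed from the sh@16-sh@18 xtrace markers (the helper name and the worker fan-out are assumptions, not the verbatim script):

    # Hypothetical reconstruction of target/ns_hotplug_stress.sh lines 16-18;
    # rpc_py, the cnode1 NQN, and the null0..null7 bdevs all appear in the trace above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                                    # sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # sh@18
        done
    }

    # One worker per null bdev, so nsid n pairs with null(n-1), e.g. -n 6 with null5 above.
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &
    done
    wait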
00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.553 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.813 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.072 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.072 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:20.072 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.072 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.072 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.072 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.072 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.072 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:20.332 rmmod nvme_tcp 00:28:20.332 rmmod nvme_fabrics 00:28:20.332 rmmod nvme_keyring 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:20.332 14:38:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1631711 ']' 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1631711 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1631711 ']' 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1631711 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1631711 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1631711' 00:28:20.332 killing process with pid 1631711 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1631711 00:28:20.332 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1631711 00:28:20.592 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.592 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.592 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.592 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:20.592 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:20.592 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.592 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.592 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.592 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.592 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.592 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.592 14:38:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.131 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:23.131 00:28:23.131 real 0m47.252s 00:28:23.131 user 3m2.783s 00:28:23.131 sys 0m21.705s 00:28:23.131 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:23.131 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:23.131 ************************************ 00:28:23.131 END TEST nvmf_ns_hotplug_stress 00:28:23.131 ************************************ 00:28:23.131 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:23.131 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:23.131 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:23.131 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:23.131 ************************************ 00:28:23.131 START TEST nvmf_delete_subsystem 00:28:23.131 ************************************ 00:28:23.131 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:23.131 * Looking for test storage... 
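Each suite is driven through the run_test wrapper (nvmf/nvmf_target_core.sh@23 above), which prints the START/END banners, times the script, and reports real/user/sys on completion — 0m47.252s wall clock for the hotplug stress here. A rough sketch of the wrapper's shape, inferred from the banner and timing output (the SPDK version also threads xtrace state through autotest_common.sh, omitted here):

    # Illustrative only; the banner text matches the log, the internals are assumed.
    run_test() {
        local test_name=$1
        shift

        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"

        time "$@"

        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    run_test nvmf_delete_subsystem \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh \
        --transport=tcp --interrupt-mode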
00:28:23.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:23.131 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:23.131 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:23.131 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:23.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.131 --rc genhtml_branch_coverage=1 00:28:23.131 --rc genhtml_function_coverage=1 00:28:23.131 --rc genhtml_legend=1 00:28:23.131 --rc geninfo_all_blocks=1 00:28:23.131 --rc geninfo_unexecuted_blocks=1 00:28:23.131 00:28:23.131 ' 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:23.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.131 --rc genhtml_branch_coverage=1 00:28:23.131 --rc genhtml_function_coverage=1 00:28:23.131 --rc genhtml_legend=1 00:28:23.131 --rc geninfo_all_blocks=1 00:28:23.131 --rc geninfo_unexecuted_blocks=1 00:28:23.131 00:28:23.131 ' 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:23.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.131 --rc genhtml_branch_coverage=1 00:28:23.131 --rc genhtml_function_coverage=1 00:28:23.131 --rc genhtml_legend=1 00:28:23.131 --rc geninfo_all_blocks=1 00:28:23.131 --rc geninfo_unexecuted_blocks=1 00:28:23.131 00:28:23.131 ' 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:23.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.131 --rc genhtml_branch_coverage=1 00:28:23.131 --rc genhtml_function_coverage=1 00:28:23.131 --rc 
genhtml_legend=1 00:28:23.131 --rc geninfo_all_blocks=1 00:28:23.131 --rc geninfo_unexecuted_blocks=1 00:28:23.131 00:28:23.131 ' 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.131 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.132 14:38:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:23.132 14:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:29.710 14:38:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.710 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:29.711 14:38:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:29.711 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:29.711 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.711 14:38:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:29.711 Found net devices under 0000:86:00.0: cvl_0_0 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:29.711 Found net devices under 0000:86:00.1: cvl_0_1 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:29.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:28:29.711 00:28:29.711 --- 10.0.0.2 ping statistics --- 00:28:29.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.711 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:29.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:28:29.711 00:28:29.711 --- 10.0.0.1 ping statistics --- 00:28:29.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.711 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:29.711 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1641750 00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1641750 00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1641750 ']' 00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
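The nvmf/common.sh@265-@291 entries above reduce to the following shell sequence (a sketch; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this host):

# Give the target its own network namespace; the initiator keeps the
# default namespace, so NVMe/TCP traffic really crosses the two NIC ports.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in on the initiator-facing interface,
# then verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1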
00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:29.712 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 [2024-11-17 14:38:18.000523] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:29.712 [2024-11-17 14:38:18.001519] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:28:29.712 [2024-11-17 14:38:18.001557] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.712 [2024-11-17 14:38:18.078251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:29.712 [2024-11-17 14:38:18.118107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.712 [2024-11-17 14:38:18.118141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.712 [2024-11-17 14:38:18.118149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.712 [2024-11-17 14:38:18.118156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.712 [2024-11-17 14:38:18.118161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.712 [2024-11-17 14:38:18.119280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.712 [2024-11-17 14:38:18.119281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.712 [2024-11-17 14:38:18.186200] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:29.712 [2024-11-17 14:38:18.186711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:29.712 [2024-11-17 14:38:18.186955] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
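The DPDK and reactor notices above come from starting the target inside that namespace with --interrupt-mode on a two-core mask; a minimal launch sketch (paths as used by this job; the readiness poll only approximates what the harness's waitforlisten does):

# Start the SPDK nvmf target in interrupt mode on cores 0-1 (-m 0x3),
# inside the target namespace created earlier.
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# Poll the default RPC socket until the app is ready to accept RPCs.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done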
00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 [2024-11-17 14:38:18.264095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 [2024-11-17 14:38:18.292455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 NULL1 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.712 14:38:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 Delay0 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1641777 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:29.712 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:29.712 [2024-11-17 14:38:18.404923] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
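Written out as plain rpc.py calls, the provisioning and the delete-under-load step traced at delete_subsystem.sh@15-@32 are the following (every argument is taken verbatim from the trace; only the $rpc shorthand is added):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512          # 1000 MB null backing bdev, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
  -r 1000000 -t 1000000 -w 1000000 -n 1000000 # avg/p99 read+write latency, microseconds (~1 s)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# With ~1 s of injected latency per I/O, a 5 s perf run is guaranteed to
# still have I/O queued when the subsystem is deleted out from under it:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1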
00:28:31.617 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:31.617 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.617 14:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:31.617-00:28:32.555 [long runs of per-I/O completion lines elided: "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)", with periodic "starting I/O failed: -6"; the unique nvme_tcp.c state errors logged among them follow]
00:28:31.617 [2024-11-17 14:38:20.560738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e12c0 is same with the state(6) to be set
00:28:31.618 [2024-11-17 14:38:20.564475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0418000c40 is same with the state(6) to be set
00:28:31.618 [2024-11-17 14:38:20.565181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f041800d4d0 is same with the state(6) to be set
00:28:32.554 [2024-11-17 14:38:21.540434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e29a0 is same with the state(6) to be set
00:28:32.554 [2024-11-17 14:38:21.563887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e14a0 is same with the state(6) to be set
00:28:32.555 [2024-11-17 14:38:21.564270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e1860 is same with the state(6) to be set
00:28:32.555 [2024-11-17 14:38:21.567813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f041800d020 is same with the state(6) to be set
00:28:32.555 [2024-11-17 14:38:21.568543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f041800d800 is same with the state(6) to be set
00:28:32.555 Initializing NVMe Controllers 00:28:32.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.555 Controller IO queue size 128, less than required. 00:28:32.555 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:32.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:32.555 Initialization complete. Launching workers. 
00:28:32.555 ======================================================== 00:28:32.555 Latency(us) 00:28:32.555 Device Information : IOPS MiB/s Average min max 00:28:32.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.71 0.08 896999.59 335.01 1005930.68 00:28:32.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.81 0.07 951035.72 728.71 1010179.19 00:28:32.555 ======================================================== 00:28:32.555 Total : 316.52 0.15 922233.44 335.01 1010179.19 00:28:32.555 00:28:32.555 [2024-11-17 14:38:21.569205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e29a0 (9): Bad file descriptor 00:28:32.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:32.555 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.555 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:32.555 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1641777 00:28:32.555 14:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1641777 00:28:33.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1641777) - No such process 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1641777 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1641777 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1641777 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.125 [2024-11-17 14:38:22.104429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1642465 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1642465 00:28:33.125 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:33.125 [2024-11-17 14:38:22.188034] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
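The wait loop producing the kill -0 / sleep 0.5 entries below, reconstructed from the @52-@60 trace (in this second round perf runs for 3 s against the re-created subsystem and is expected to exit on its own rather than be killed):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
delay=0
# Poll until the perf process goes away; bail out after ~10 s.
while kill -0 "$perf_pid" 2>/dev/null; do
  (( delay++ > 20 )) && exit 1
  sleep 0.5
done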
00:28:33.694 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:33.694 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1642465 00:28:33.694 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:33.953-00:28:35.659 [four more identical wait-loop iterations elided]
00:28:36.228 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:36.228 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1642465 00:28:36.228 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:36.228 Initializing NVMe Controllers 00:28:36.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.228 Controller IO queue size 128, less than required. 00:28:36.228 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:36.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:36.228 Initialization complete. Launching workers. 
00:28:36.228 ======================================================== 00:28:36.228 Latency(us) 00:28:36.228 Device Information : IOPS MiB/s Average min max 00:28:36.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002506.60 1000123.44 1041382.06 00:28:36.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003700.20 1000191.59 1010546.24 00:28:36.228 ======================================================== 00:28:36.228 Total : 256.00 0.12 1003103.40 1000123.44 1041382.06 00:28:36.228 00:28:36.487 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:36.487 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1642465 00:28:36.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1642465) - No such process 00:28:36.487 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1642465 00:28:36.487 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:36.487 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:36.487 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:36.487 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:36.487 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.487 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:36.487 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.487 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.487 rmmod nvme_tcp 00:28:36.487 rmmod nvme_fabrics 00:28:36.487 rmmod nvme_keyring 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1641750 ']' 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1641750 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1641750 ']' 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1641750 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1641750 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1641750' 00:28:36.747 killing process with pid 1641750 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1641750 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1641750 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.747 14:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:39.284 00:28:39.284 real 0m16.171s 00:28:39.284 user 0m26.031s 00:28:39.284 sys 0m6.203s 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:39.284 ************************************ 00:28:39.284 END TEST nvmf_delete_subsystem 00:28:39.284 ************************************ 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:39.284 ************************************ 00:28:39.284 START TEST nvmf_host_management 00:28:39.284 ************************************ 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:39.284 * Looking for test storage... 00:28:39.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:39.284 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:39.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.285 --rc genhtml_branch_coverage=1 00:28:39.285 --rc genhtml_function_coverage=1 00:28:39.285 --rc genhtml_legend=1 00:28:39.285 --rc geninfo_all_blocks=1 00:28:39.285 --rc geninfo_unexecuted_blocks=1 00:28:39.285 00:28:39.285 ' 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:39.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.285 --rc genhtml_branch_coverage=1 00:28:39.285 --rc genhtml_function_coverage=1 00:28:39.285 --rc genhtml_legend=1 00:28:39.285 --rc geninfo_all_blocks=1 00:28:39.285 --rc geninfo_unexecuted_blocks=1 00:28:39.285 00:28:39.285 ' 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:39.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.285 --rc genhtml_branch_coverage=1 00:28:39.285 --rc genhtml_function_coverage=1 00:28:39.285 --rc genhtml_legend=1 00:28:39.285 --rc geninfo_all_blocks=1 00:28:39.285 --rc geninfo_unexecuted_blocks=1 00:28:39.285 00:28:39.285 ' 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:39.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.285 --rc genhtml_branch_coverage=1 00:28:39.285 --rc genhtml_function_coverage=1 00:28:39.285 --rc genhtml_legend=1 
00:28:39.285 --rc geninfo_all_blocks=1 00:28:39.285 --rc geninfo_unexecuted_blocks=1 00:28:39.285 00:28:39.285 ' 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.285 14:38:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.285 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.286 14:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.862 14:38:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.862 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:45.863 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:45.863 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
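[Editor's note] The @410-@428 trace above resolves each E810 PCI function to its kernel net device by expanding a sysfs glob. A minimal sketch of that lookup, using the two BDFs reported in the log:

```bash
#!/usr/bin/env bash
# Sketch of the sysfs lookup traced above: a NIC's net interfaces appear
# as directory names under /sys/bus/pci/devices/<BDF>/net/. The BDFs are
# the ones the log reports for the two E810 (0x159b) ports.
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    pci_net_devs=( "${pci_net_devs[@]##*/}" )    # strip paths, keep iface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
```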
00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:45.863 Found net devices under 0000:86:00.0: cvl_0_0 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:45.863 Found net devices under 0000:86:00.1: cvl_0_1 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.863 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.863 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.863 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.863 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:45.863 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.863 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.863 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.863 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:45.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:28:45.864 00:28:45.864 --- 10.0.0.2 ping statistics --- 00:28:45.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.864 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:28:45.864 00:28:45.864 --- 10.0.0.1 ping statistics --- 00:28:45.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.864 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1646451 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1646451 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1646451 ']' 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:45.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.864 14:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.864 [2024-11-17 14:38:34.282066] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:45.864 [2024-11-17 14:38:34.282990] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:28:45.864 [2024-11-17 14:38:34.283024] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.864 [2024-11-17 14:38:34.361320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.864 [2024-11-17 14:38:34.404116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.864 [2024-11-17 14:38:34.404155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.864 [2024-11-17 14:38:34.404162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.864 [2024-11-17 14:38:34.404169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.864 [2024-11-17 14:38:34.404174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.864 [2024-11-17 14:38:34.405701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.864 [2024-11-17 14:38:34.405809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.864 [2024-11-17 14:38:34.405914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.864 [2024-11-17 14:38:34.405915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:45.864 [2024-11-17 14:38:34.473393] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:45.864 [2024-11-17 14:38:34.474446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:45.864 [2024-11-17 14:38:34.474506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:45.864 [2024-11-17 14:38:34.474836] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:45.864 [2024-11-17 14:38:34.474895] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
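[Editor's note] The nvmf_tcp_init trace above builds the phy test topology: one physical port is moved into a private network namespace for the target, the peer port stays in the root namespace for the initiator, and TCP/4420 is opened before both sides are ping-verified. A condensed sketch of those steps (interface names, namespace name, and addresses are copied from the log; run as root):

```bash
#!/usr/bin/env bash
# Condensed replay of the netns setup traced above (nvmf/common.sh@250-@291).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions, as the log does.
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
```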
00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.124 [2024-11-17 14:38:35.146646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.124 Malloc0 00:28:46.124 [2024-11-17 14:38:35.234827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.124 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1646715 00:28:46.125 14:38:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1646715 /var/tmp/bdevperf.sock 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1646715 ']' 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.125 { 00:28:46.125 "params": { 00:28:46.125 "name": "Nvme$subsystem", 00:28:46.125 "trtype": "$TEST_TRANSPORT", 00:28:46.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.125 "adrfam": "ipv4", 00:28:46.125 "trsvcid": "$NVMF_PORT", 00:28:46.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.125 "hdgst": ${hdgst:-false}, 00:28:46.125 "ddgst": ${ddgst:-false} 00:28:46.125 }, 00:28:46.125 "method": "bdev_nvme_attach_controller" 00:28:46.125 } 00:28:46.125 EOF 00:28:46.125 )") 00:28:46.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
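[Editor's note] The @560-@586 trace above shows how bdevperf receives its controller config: a heredoc is expanded with the subsystem number and handed over via process substitution, which the kernel exposes as `/dev/fd/63` (matching `--json /dev/fd/63` in the command line). A simplified sketch under those assumptions; the real gen_nvmf_target_json helper additionally assembles multiple fragments and filters them through `jq .`, and the function name below is illustrative:

```bash
#!/usr/bin/env bash
# Simplified sketch of the config generation traced above; parameter
# values are copied from the printf output in the log.
gen_target_json() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Process substitution delivers the JSON on an anonymous fd, as in the log.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_target_json 0) -q 64 -o 65536 -w verify -t 10
```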
00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:46.125 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:46.125 "params": { 00:28:46.125 "name": "Nvme0", 00:28:46.125 "trtype": "tcp", 00:28:46.125 "traddr": "10.0.0.2", 00:28:46.125 "adrfam": "ipv4", 00:28:46.125 "trsvcid": "4420", 00:28:46.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:46.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:46.125 "hdgst": false, 00:28:46.125 "ddgst": false 00:28:46.125 }, 00:28:46.125 "method": "bdev_nvme_attach_controller" 00:28:46.125 }' 00:28:46.125 [2024-11-17 14:38:35.335639] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:28:46.125 [2024-11-17 14:38:35.335686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646715 ] 00:28:46.385 [2024-11-17 14:38:35.412915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.385 [2024-11-17 14:38:35.454178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.644 Running I/O for 10 seconds... 00:28:46.644 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.644 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:46.644 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:46.644 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.644 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.644 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.644 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:46.644 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:46.645 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:46.645 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:46.645 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:46.645 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:46.645 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:46.645 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:46.645 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:46.645 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:46.645 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.645 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.645 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.904 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:46.904 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:46.904 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:47.165 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:47.165 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:47.165 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:47.165 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:47.165 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.165 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:47.165 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.165 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:28:47.165 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:28:47.166 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:47.166 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:47.166 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:47.166 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:47.166 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.166 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:47.166 [2024-11-17 14:38:36.178579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.166 [2024-11-17 14:38:36.178616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.166 [2024-11-17 14:38:36.178633] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.166 [2024-11-17 14:38:36.178641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.166 [... identical NOTICE pairs for WRITE commands cid:2 through cid:60 (lba 98560-105984), each completed ABORTED - SQ DELETION (00/08), elided ...] 00:28:47.167 [2024-11-17 14:38:36.179552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.167 [2024-11-17 14:38:36.179558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.167 [2024-11-17 14:38:36.179566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.167 [2024-11-17 14:38:36.179573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.167 [2024-11-17 14:38:36.179581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.167 [2024-11-17 14:38:36.179588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.167 [2024-11-17 14:38:36.179616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.167 [2024-11-17 14:38:36.180565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:47.167 task offset: 98304 on job bdev=Nvme0n1 fails 00:28:47.167 00:28:47.167 Latency(us) 00:28:47.167 [2024-11-17T13:38:36.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.167 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:47.167 Job: Nvme0n1 ended in about 0.40 seconds with error 00:28:47.167 Verification LBA range: start 0x0 length 0x400 00:28:47.167 Nvme0n1 : 0.40 1897.33 118.58 158.11 0.00 30293.66 2535.96 27582.11 00:28:47.167 [2024-11-17T13:38:36.392Z] =================================================================================================================== 00:28:47.167 [2024-11-17T13:38:36.392Z] Total : 1897.33 118.58 158.11 0.00 30293.66 2535.96 27582.11 00:28:47.167 [2024-11-17 14:38:36.182980] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:47.167 [2024-11-17 14:38:36.183002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b0500 (9): Bad file descriptor 00:28:47.167 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.167 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:47.167 [2024-11-17 14:38:36.184010] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:47.167 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.167 [2024-11-17 14:38:36.184131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:47.167 [2024-11-17 14:38:36.184153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.167 [2024-11-17 14:38:36.184169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode0 00:28:47.167 [2024-11-17 14:38:36.184177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:47.168 [2024-11-17 14:38:36.184184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.168 [2024-11-17 14:38:36.184191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b0500 00:28:47.168 [2024-11-17 14:38:36.184209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b0500 (9): Bad file descriptor 00:28:47.168 [2024-11-17 14:38:36.184221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:47.168 [2024-11-17 14:38:36.184228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:47.168 [2024-11-17 14:38:36.184236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:47.168 [2024-11-17 14:38:36.184244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:47.168 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:47.168 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.168 14:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1646715 00:28:48.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1646715) - No such process 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:48.106 { 00:28:48.106 "params": { 00:28:48.106 "name": "Nvme$subsystem", 00:28:48.106 "trtype": "$TEST_TRANSPORT", 00:28:48.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.106 "adrfam": "ipv4", 00:28:48.106 "trsvcid": "$NVMF_PORT", 00:28:48.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.106 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:28:48.106 "hdgst": ${hdgst:-false}, 00:28:48.106 "ddgst": ${ddgst:-false} 00:28:48.106 }, 00:28:48.106 "method": "bdev_nvme_attach_controller" 00:28:48.106 } 00:28:48.106 EOF 00:28:48.106 )") 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:48.106 14:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:48.106 "params": { 00:28:48.106 "name": "Nvme0", 00:28:48.106 "trtype": "tcp", 00:28:48.106 "traddr": "10.0.0.2", 00:28:48.106 "adrfam": "ipv4", 00:28:48.106 "trsvcid": "4420", 00:28:48.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:48.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:48.106 "hdgst": false, 00:28:48.106 "ddgst": false 00:28:48.106 }, 00:28:48.106 "method": "bdev_nvme_attach_controller" 00:28:48.106 }' 00:28:48.106 [2024-11-17 14:38:37.249960] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:28:48.106 [2024-11-17 14:38:37.250009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646970 ] 00:28:48.106 [2024-11-17 14:38:37.324786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.365 [2024-11-17 14:38:37.363983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.365 Running I/O for 1 seconds... 
00:28:49.744 1854.00 IOPS, 115.88 MiB/s 00:28:49.744 Latency(us) 00:28:49.744 [2024-11-17T13:38:38.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.744 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.744 Verification LBA range: start 0x0 length 0x400 00:28:49.744 Nvme0n1 : 1.03 1864.55 116.53 0.00 0.00 33772.31 6069.20 27582.11 00:28:49.744 [2024-11-17T13:38:38.969Z] =================================================================================================================== 00:28:49.744 [2024-11-17T13:38:38.969Z] Total : 1864.55 116.53 0.00 0.00 33772.31 6069.20 27582.11 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.744 rmmod nvme_tcp 00:28:49.744 rmmod nvme_fabrics 00:28:49.744 rmmod nvme_keyring 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1646451 ']' 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1646451 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1646451 ']' 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1646451 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.744 14:38:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1646451 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1646451' 00:28:49.744 killing process with pid 1646451 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1646451 00:28:49.744 14:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1646451 00:28:50.004 [2024-11-17 14:38:39.043024] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.004 14:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:52.543 00:28:52.543 real 0m13.053s 00:28:52.543 user 0m18.476s 00:28:52.543 sys 0m6.389s 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:52.543 ************************************ 00:28:52.543 END TEST nvmf_host_management 00:28:52.543 ************************************ 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:52.543 ************************************ 00:28:52.543 START TEST nvmf_lvol 00:28:52.543 ************************************ 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:52.543 * Looking for test storage... 00:28:52.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:52.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.543 --rc genhtml_branch_coverage=1 00:28:52.543 --rc genhtml_function_coverage=1 00:28:52.543 --rc genhtml_legend=1 00:28:52.543 --rc geninfo_all_blocks=1 00:28:52.543 --rc geninfo_unexecuted_blocks=1 00:28:52.543 00:28:52.543 ' 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:52.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.543 --rc genhtml_branch_coverage=1 00:28:52.543 --rc genhtml_function_coverage=1 00:28:52.543 --rc genhtml_legend=1 00:28:52.543 --rc geninfo_all_blocks=1 00:28:52.543 --rc geninfo_unexecuted_blocks=1 00:28:52.543 00:28:52.543 ' 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:52.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.543 --rc genhtml_branch_coverage=1 00:28:52.543 --rc genhtml_function_coverage=1 00:28:52.543 --rc genhtml_legend=1 00:28:52.543 --rc geninfo_all_blocks=1 00:28:52.543 --rc geninfo_unexecuted_blocks=1 00:28:52.543 00:28:52.543 ' 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:52.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.543 --rc genhtml_branch_coverage=1 00:28:52.543 --rc genhtml_function_coverage=1 00:28:52.543 --rc genhtml_legend=1 00:28:52.543 --rc geninfo_all_blocks=1 00:28:52.543 --rc geninfo_unexecuted_blocks=1 00:28:52.543 00:28:52.543 ' 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.543 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.544 14:38:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.544 14:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.170 14:38:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:59.170 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:59.170 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:59.170 Found net devices under 0000:86:00.0: cvl_0_0 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:59.170 Found net devices under 0000:86:00.1: cvl_0_1 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.170 
14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.170 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:28:59.171 00:28:59.171 --- 10.0.0.2 ping statistics --- 00:28:59.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.171 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:28:59.171 00:28:59.171 --- 10.0.0.1 ping statistics --- 00:28:59.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.171 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1650727 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1650727 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1650727 ']' 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.171 14:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:59.171 [2024-11-17 14:38:47.414629] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
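The trace above (nvmf_tcp_init through the two pings) builds the test topology: one e810 port, cvl_0_0, is moved into a private namespace to act as the target, cvl_0_1 stays in the root namespace as the initiator, a tagged iptables rule opens TCP/4420, and reachability is verified in both directions. A minimal standalone sketch of that setup, using the interface names and addresses from this run (adjust for other NICs):

# Sketch of the namespace split performed by nvmf/common.sh (nvmf_tcp_init).
ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port out of root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in; the comment lets cleanup find the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator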
00:28:59.171 [2024-11-17 14:38:47.415551] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:28:59.171 [2024-11-17 14:38:47.415584] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.171 [2024-11-17 14:38:47.495720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:59.171 [2024-11-17 14:38:47.539558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.171 [2024-11-17 14:38:47.539595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.171 [2024-11-17 14:38:47.539603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.171 [2024-11-17 14:38:47.539609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.171 [2024-11-17 14:38:47.539614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.171 [2024-11-17 14:38:47.540907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.171 [2024-11-17 14:38:47.540925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.171 [2024-11-17 14:38:47.540930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.171 [2024-11-17 14:38:47.609096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:59.171 [2024-11-17 14:38:47.609555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:59.171 [2024-11-17 14:38:47.609735] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:59.171 [2024-11-17 14:38:47.609913] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
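nvmfappstart then starts the target inside that namespace in interrupt mode; with -m 0x7 the DPDK init above reports three cores, a reactor comes up on each, and every spdk_thread is flipped to intr mode, which is what the thread.c notices record. A sketch of the launch (paths shortened) plus a simplified stand-in for waitforlisten — the real helper retries against /var/tmp/spdk.sock up to max_retries=100:

# Launch assembled by nvmf/common.sh@508 in this run.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!
# Simplified wait: poll the RPC socket until the app answers.
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done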
00:28:59.171 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.171 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:59.171 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.171 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.171 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:59.171 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.171 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:59.480 [2024-11-17 14:38:48.473838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.480 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:59.739 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:59.739 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:59.739 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:59.739 14:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:59.998 14:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:00.257 14:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=73b2df60-9dd2-47bf-8e05-9544eb98f925 00:29:00.257 14:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 73b2df60-9dd2-47bf-8e05-9544eb98f925 lvol 20 00:29:00.516 14:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a1b75257-8377-4e70-9427-33c9b202dc53 00:29:00.516 14:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:00.516 14:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a1b75257-8377-4e70-9427-33c9b202dc53 00:29:00.775 14:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:01.034 [2024-11-17 14:38:50.113703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:29:01.035 14:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:01.294 14:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1651225 00:29:01.294 14:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:01.294 14:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:02.231 14:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a1b75257-8377-4e70-9427-33c9b202dc53 MY_SNAPSHOT 00:29:02.491 14:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=db2523f5-2b2e-4e76-92a2-3e6e871f160a 00:29:02.491 14:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a1b75257-8377-4e70-9427-33c9b202dc53 30 00:29:02.750 14:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone db2523f5-2b2e-4e76-92a2-3e6e871f160a MY_CLONE 00:29:03.009 14:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=eee23e2a-7a42-4d0f-b0b7-f48ee37890b8 00:29:03.009 14:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate eee23e2a-7a42-4d0f-b0b7-f48ee37890b8 00:29:03.577 14:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1651225 00:29:11.708 Initializing NVMe Controllers 00:29:11.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:11.708 Controller IO queue size 128, less than required. 00:29:11.708 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:11.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:11.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:11.708 Initialization complete. Launching workers. 
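From here nvmf_lvol.sh is a straight sequence of rpc.py calls against that target: two 64 MiB malloc bdevs become a raid0, an lvstore goes on top, a lvol is carved out and exported over TCP at 10.0.0.2:4420, and snapshot/resize/clone/inflate are exercised while spdk_nvme_perf writes through the fabric. Condensed from the trace — the $() captures mirror how the script collects the returned names and UUIDs, which differ per run:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
m0=$($rpc bdev_malloc_create 64 512)                # 64 MiB, 512 B blocks
m1=$($rpc bdev_malloc_create 64 512)
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# background writer while the lvol is mutated underneath it
./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait                                                # let perf finish its 10 s run
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"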
00:29:11.708 ======================================================== 00:29:11.708 Latency(us) 00:29:11.708 Device Information : IOPS MiB/s Average min max 00:29:11.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12297.30 48.04 10409.62 2204.93 58063.74 00:29:11.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12192.20 47.63 10497.43 3863.62 56010.57 00:29:11.708 ======================================================== 00:29:11.708 Total : 24489.50 95.66 10453.34 2204.93 58063.74 00:29:11.708 00:29:11.708 14:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:11.708 14:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a1b75257-8377-4e70-9427-33c9b202dc53 00:29:11.968 14:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 73b2df60-9dd2-47bf-8e05-9544eb98f925 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:12.227 rmmod nvme_tcp 00:29:12.227 rmmod nvme_fabrics 00:29:12.227 rmmod nvme_keyring 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1650727 ']' 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1650727 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1650727 ']' 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1650727 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1650727 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1650727' 00:29:12.227 killing process with pid 1650727 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1650727 00:29:12.227 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1650727 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.487 14:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.393 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.393 00:29:14.393 real 0m22.373s 00:29:14.393 user 0m55.220s 00:29:14.393 sys 0m10.026s 00:29:14.393 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.393 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:14.393 ************************************ 00:29:14.393 END TEST nvmf_lvol 00:29:14.393 ************************************ 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:14.654 ************************************ 00:29:14.654 START TEST nvmf_lvs_grow 00:29:14.654 
************************************ 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:14.654 * Looking for test storage... 00:29:14.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.654 --rc genhtml_branch_coverage=1 00:29:14.654 --rc genhtml_function_coverage=1 00:29:14.654 --rc genhtml_legend=1 00:29:14.654 --rc geninfo_all_blocks=1 00:29:14.654 --rc geninfo_unexecuted_blocks=1 00:29:14.654 00:29:14.654 ' 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.654 --rc genhtml_branch_coverage=1 00:29:14.654 --rc genhtml_function_coverage=1 00:29:14.654 --rc genhtml_legend=1 00:29:14.654 --rc geninfo_all_blocks=1 00:29:14.654 --rc geninfo_unexecuted_blocks=1 00:29:14.654 00:29:14.654 ' 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.654 --rc genhtml_branch_coverage=1 00:29:14.654 --rc genhtml_function_coverage=1 00:29:14.654 --rc genhtml_legend=1 00:29:14.654 --rc geninfo_all_blocks=1 00:29:14.654 --rc geninfo_unexecuted_blocks=1 00:29:14.654 00:29:14.654 ' 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.654 --rc genhtml_branch_coverage=1 00:29:14.654 --rc genhtml_function_coverage=1 00:29:14.654 --rc genhtml_legend=1 00:29:14.654 --rc geninfo_all_blocks=1 00:29:14.654 --rc geninfo_unexecuted_blocks=1 00:29:14.654 00:29:14.654 ' 00:29:14.654 14:39:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.654 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
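The lt/cmp_versions xtrace a little earlier (scripts/common.sh@333-368) is the lcov version gate: 1.15 is split on dots, compared field by field against 2, and since 1 < 2 the older-lcov coverage flags are kept. A reduced sketch of that comparison — simplified in that missing fields default to 0 and the traced decimal-validation step is dropped:

# Sketch of the dotted-version compare traced from scripts/common.sh.
lt() {  # returns 0 when $1 < $2
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal
}
lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"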
00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.655 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.914 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.914 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.914 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.914 14:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:21.487 14:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:21.487 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:21.487 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:21.487 Found net devices under 0000:86:00.0: cvl_0_0 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:21.487 Found net devices under 0000:86:00.1: cvl_0_1 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.487 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:21.488 14:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:21.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:29:21.488 00:29:21.488 --- 10.0.0.2 ping statistics --- 00:29:21.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.488 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:21.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:29:21.488 00:29:21.488 --- 10.0.0.1 ping statistics --- 00:29:21.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.488 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1657089 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1657089 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1657089 ']' 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.488 14:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.488 [2024-11-17 14:39:09.803424] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
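Note how the port-4420 rule is again added through ipts, which stamps an 'SPDK_NVMF:' comment carrying the raw arguments onto the rule; the matching iptr (seen in the nvmf_lvol teardown above at nvmf/common.sh@791) then removes every tagged rule by round-tripping the ruleset through grep. A sketch of the pair as the traces expand them:

# Add a rule tagged so it can be found again (pattern from nvmf/common.sh ipts).
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
# Remove every tagged rule in one shot (pattern from nvmf/common.sh iptr).
iptr() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
iptr   # strips all SPDK_NVMF-tagged rules, leaves everything else intact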
00:29:21.488 [2024-11-17 14:39:09.804350] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:29:21.488 [2024-11-17 14:39:09.804388] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.488 [2024-11-17 14:39:09.883393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.488 [2024-11-17 14:39:09.924040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.488 [2024-11-17 14:39:09.924075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.488 [2024-11-17 14:39:09.924082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.488 [2024-11-17 14:39:09.924088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.488 [2024-11-17 14:39:09.924093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:21.488 [2024-11-17 14:39:09.924643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.488 [2024-11-17 14:39:09.990359] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:21.488 [2024-11-17 14:39:09.990567] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:21.488 [2024-11-17 14:39:10.233278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.488 ************************************ 00:29:21.488 START TEST lvs_grow_clean 00:29:21.488 ************************************ 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:21.488 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:21.748 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=68d06e0d-d0dd-4345-aca6-533d54371767 00:29:21.748 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d06e0d-d0dd-4345-aca6-533d54371767 00:29:21.748 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:21.748 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:21.748 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:21.748 14:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 68d06e0d-d0dd-4345-aca6-533d54371767 lvol 150 00:29:22.007 14:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ae0ec9f5-bf4d-4c98-8f4c-f243b7605582 00:29:22.007 14:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:22.007 14:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:22.267 [2024-11-17 14:39:11.285032] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:22.267 [2024-11-17 14:39:11.285164] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:22.267 true 00:29:22.267 14:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d06e0d-d0dd-4345-aca6-533d54371767 00:29:22.267 14:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:22.526 14:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:22.526 14:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:22.526 14:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ae0ec9f5-bf4d-4c98-8f4c-f243b7605582 00:29:22.785 14:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:23.044 [2024-11-17 14:39:12.021543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.044 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:23.044 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:23.044 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1657382 00:29:23.044 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:23.044 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1657382 /var/tmp/bdevperf.sock 00:29:23.044 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1657382 ']' 00:29:23.044 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:23.044 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.044 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:23.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:23.044 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.044 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:23.044 [2024-11-17 14:39:12.253078] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:29:23.044 [2024-11-17 14:39:12.253122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657382 ] 00:29:23.303 [2024-11-17 14:39:12.326465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.303 [2024-11-17 14:39:12.372895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.303 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.303 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:23.303 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:23.872 Nvme0n1 00:29:23.872 14:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:23.872 [ 00:29:23.872 { 00:29:23.872 "name": "Nvme0n1", 00:29:23.872 "aliases": [ 00:29:23.872 "ae0ec9f5-bf4d-4c98-8f4c-f243b7605582" 00:29:23.872 ], 00:29:23.872 "product_name": "NVMe disk", 00:29:23.872 "block_size": 4096, 00:29:23.872 "num_blocks": 38912, 00:29:23.872 "uuid": "ae0ec9f5-bf4d-4c98-8f4c-f243b7605582", 00:29:23.872 "numa_id": 1, 00:29:23.872 "assigned_rate_limits": { 00:29:23.872 "rw_ios_per_sec": 0, 00:29:23.872 "rw_mbytes_per_sec": 0, 00:29:23.872 "r_mbytes_per_sec": 0, 00:29:23.872 "w_mbytes_per_sec": 0 00:29:23.872 }, 00:29:23.872 "claimed": false, 00:29:23.872 "zoned": false, 00:29:23.872 "supported_io_types": { 00:29:23.872 "read": true, 00:29:23.872 "write": true, 00:29:23.872 "unmap": true, 00:29:23.872 "flush": true, 00:29:23.872 "reset": true, 00:29:23.872 "nvme_admin": true, 00:29:23.872 "nvme_io": true, 00:29:23.872 "nvme_io_md": false, 00:29:23.872 "write_zeroes": true, 00:29:23.872 "zcopy": false, 00:29:23.872 "get_zone_info": false, 00:29:23.872 "zone_management": false, 00:29:23.872 "zone_append": false, 00:29:23.872 "compare": true, 00:29:23.872 "compare_and_write": true, 00:29:23.872 "abort": true, 00:29:23.872 "seek_hole": false, 00:29:23.872 "seek_data": false, 00:29:23.872 "copy": true, 
00:29:23.872 "nvme_iov_md": false 00:29:23.872 }, 00:29:23.872 "memory_domains": [ 00:29:23.872 { 00:29:23.872 "dma_device_id": "system", 00:29:23.872 "dma_device_type": 1 00:29:23.872 } 00:29:23.872 ], 00:29:23.872 "driver_specific": { 00:29:23.872 "nvme": [ 00:29:23.872 { 00:29:23.872 "trid": { 00:29:23.872 "trtype": "TCP", 00:29:23.872 "adrfam": "IPv4", 00:29:23.872 "traddr": "10.0.0.2", 00:29:23.872 "trsvcid": "4420", 00:29:23.872 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:23.872 }, 00:29:23.872 "ctrlr_data": { 00:29:23.872 "cntlid": 1, 00:29:23.872 "vendor_id": "0x8086", 00:29:23.872 "model_number": "SPDK bdev Controller", 00:29:23.872 "serial_number": "SPDK0", 00:29:23.872 "firmware_revision": "25.01", 00:29:23.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.872 "oacs": { 00:29:23.872 "security": 0, 00:29:23.872 "format": 0, 00:29:23.872 "firmware": 0, 00:29:23.872 "ns_manage": 0 00:29:23.872 }, 00:29:23.872 "multi_ctrlr": true, 00:29:23.872 "ana_reporting": false 00:29:23.872 }, 00:29:23.872 "vs": { 00:29:23.872 "nvme_version": "1.3" 00:29:23.872 }, 00:29:23.872 "ns_data": { 00:29:23.872 "id": 1, 00:29:23.872 "can_share": true 00:29:23.872 } 00:29:23.872 } 00:29:23.872 ], 00:29:23.872 "mp_policy": "active_passive" 00:29:23.872 } 00:29:23.872 } 00:29:23.872 ] 00:29:23.872 14:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1657595 00:29:23.872 14:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:23.872 14:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:24.131 Running I/O for 10 seconds... 
00:29:25.069 Latency(us) 00:29:25.069 [2024-11-17T13:39:14.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.069 Nvme0n1 : 1.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:25.069 [2024-11-17T13:39:14.294Z] =================================================================================================================== 00:29:25.069 [2024-11-17T13:39:14.294Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:25.069 00:29:26.007 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 68d06e0d-d0dd-4345-aca6-533d54371767 00:29:26.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:26.007 Nvme0n1 : 2.00 22447.50 87.69 0.00 0.00 0.00 0.00 0.00 00:29:26.007 [2024-11-17T13:39:15.232Z] =================================================================================================================== 00:29:26.007 [2024-11-17T13:39:15.232Z] Total : 22447.50 87.69 0.00 0.00 0.00 0.00 0.00 00:29:26.007 00:29:26.007 true 00:29:26.007 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d06e0d-d0dd-4345-aca6-533d54371767 00:29:26.007 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:26.266 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:26.266 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:26.266 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1657595 00:29:27.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.202 Nvme0n1 : 3.00 22548.33 88.08 0.00 0.00 0.00 0.00 0.00 00:29:27.202 [2024-11-17T13:39:16.427Z] =================================================================================================================== 00:29:27.202 [2024-11-17T13:39:16.427Z] Total : 22548.33 88.08 0.00 0.00 0.00 0.00 0.00 00:29:27.202 00:29:28.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.138 Nvme0n1 : 4.00 22642.25 88.45 0.00 0.00 0.00 0.00 0.00 00:29:28.138 [2024-11-17T13:39:17.363Z] =================================================================================================================== 00:29:28.138 [2024-11-17T13:39:17.363Z] Total : 22642.25 88.45 0.00 0.00 0.00 0.00 0.00 00:29:28.138 00:29:29.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.076 Nvme0n1 : 5.00 22708.20 88.70 0.00 0.00 0.00 0.00 0.00 00:29:29.076 [2024-11-17T13:39:18.301Z] =================================================================================================================== 00:29:29.076 [2024-11-17T13:39:18.301Z] Total : 22708.20 88.70 0.00 0.00 0.00 0.00 0.00 00:29:29.076 00:29:30.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.014 Nvme0n1 : 6.00 22765.33 88.93 0.00 0.00 0.00 0.00 0.00 00:29:30.014 [2024-11-17T13:39:19.239Z] 
=================================================================================================================== 00:29:30.014 [2024-11-17T13:39:19.239Z] Total : 22765.33 88.93 0.00 0.00 0.00 0.00 0.00 00:29:30.014 00:29:30.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.951 Nvme0n1 : 7.00 22778.86 88.98 0.00 0.00 0.00 0.00 0.00 00:29:30.951 [2024-11-17T13:39:20.176Z] =================================================================================================================== 00:29:30.951 [2024-11-17T13:39:20.176Z] Total : 22778.86 88.98 0.00 0.00 0.00 0.00 0.00 00:29:30.951 00:29:32.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.330 Nvme0n1 : 8.00 22757.25 88.90 0.00 0.00 0.00 0.00 0.00 00:29:32.330 [2024-11-17T13:39:21.555Z] =================================================================================================================== 00:29:32.330 [2024-11-17T13:39:21.555Z] Total : 22757.25 88.90 0.00 0.00 0.00 0.00 0.00 00:29:32.330 00:29:33.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.268 Nvme0n1 : 9.00 22789.89 89.02 0.00 0.00 0.00 0.00 0.00 00:29:33.268 [2024-11-17T13:39:22.493Z] =================================================================================================================== 00:29:33.268 [2024-11-17T13:39:22.493Z] Total : 22789.89 89.02 0.00 0.00 0.00 0.00 0.00 00:29:33.268 00:29:34.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.207 Nvme0n1 : 10.00 22820.80 89.14 0.00 0.00 0.00 0.00 0.00 00:29:34.207 [2024-11-17T13:39:23.432Z] =================================================================================================================== 00:29:34.207 [2024-11-17T13:39:23.432Z] Total : 22820.80 89.14 0.00 0.00 0.00 0.00 0.00 00:29:34.207 00:29:34.207 00:29:34.207 Latency(us) 00:29:34.207 [2024-11-17T13:39:23.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.207 Nvme0n1 : 10.01 22819.66 89.14 0.00 0.00 5606.35 3191.32 28379.94 00:29:34.207 [2024-11-17T13:39:23.432Z] =================================================================================================================== 00:29:34.207 [2024-11-17T13:39:23.432Z] Total : 22819.66 89.14 0.00 0.00 5606.35 3191.32 28379.94 00:29:34.207 { 00:29:34.207 "results": [ 00:29:34.207 { 00:29:34.207 "job": "Nvme0n1", 00:29:34.207 "core_mask": "0x2", 00:29:34.207 "workload": "randwrite", 00:29:34.207 "status": "finished", 00:29:34.207 "queue_depth": 128, 00:29:34.207 "io_size": 4096, 00:29:34.207 "runtime": 10.006107, 00:29:34.207 "iops": 22819.664031176162, 00:29:34.207 "mibps": 89.13931262178188, 00:29:34.207 "io_failed": 0, 00:29:34.207 "io_timeout": 0, 00:29:34.207 "avg_latency_us": 5606.34844637803, 00:29:34.207 "min_latency_us": 3191.318260869565, 00:29:34.207 "max_latency_us": 28379.93739130435 00:29:34.207 } 00:29:34.207 ], 00:29:34.207 "core_count": 1 00:29:34.207 } 00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1657382 00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1657382 ']' 00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1657382 
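The cluster assertions bracketing this run are plain arithmetic once the 4 MiB cluster size (--cluster-sz 4194304) is kept in mind; the observed counts match the file sizes exactly, assuming the lvstore holds back one cluster for its metadata at this size and --md-pages-per-cluster-ratio:

echo $(( 200 * 1024 * 1024 / 4194304 - 1 ))          # 49 data clusters in the 200 MiB AIO file
echo $(( 400 * 1024 * 1024 / 4194304 - 1 ))          # 99 after truncate to 400 MiB, rescan, grow_lvstore
echo $(( (150 * 1024 * 1024 + 4194303) / 4194304 ))  # 38 clusters for the 150 MiB lvol, rounded up
echo $(( 99 - 38 ))                                  # 61 free clusters, the value checked below

The 38-cluster figure also accounts for the num_blocks of 38912 reported for Nvme0n1 above: 38 clusters x 4 MiB / 4096-byte blocks = 38912.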
00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1657382 00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1657382' 00:29:34.207 killing process with pid 1657382 00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1657382 00:29:34.207 Received shutdown signal, test time was about 10.000000 seconds 00:29:34.207 00:29:34.207 Latency(us) 00:29:34.207 [2024-11-17T13:39:23.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.207 [2024-11-17T13:39:23.432Z] =================================================================================================================== 00:29:34.207 [2024-11-17T13:39:23.432Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1657382 00:29:34.207 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:34.467 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:34.726 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d06e0d-d0dd-4345-aca6-533d54371767 00:29:34.726 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:34.986 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:34.986 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:34.986 14:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:34.986 [2024-11-17 14:39:24.129092] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d06e0d-d0dd-4345-aca6-533d54371767 
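The NOT wrapper invoked above is the suite's negative-assertion helper: bdev_aio_delete has just hot-removed the base bdev (closing the lvstore with it), so bdev_lvol_get_lvstores is now required to fail, and the step passes only when it does. A minimal stand-in for the idea (the real helper in autotest_common.sh also inspects the exit code, as the trace below shows):

NOT() { ! "$@"; }
NOT scripts/rpc.py bdev_lvol_get_lvstores -u 68d06e0d-d0dd-4345-aca6-533d54371767
# succeeds only because the RPC returns the -19 "No such device" error shown below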
00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d06e0d-d0dd-4345-aca6-533d54371767 00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:34.986 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d06e0d-d0dd-4345-aca6-533d54371767 00:29:35.246 request: 00:29:35.246 { 00:29:35.246 "uuid": "68d06e0d-d0dd-4345-aca6-533d54371767", 00:29:35.246 "method": "bdev_lvol_get_lvstores", 00:29:35.246 "req_id": 1 00:29:35.246 } 00:29:35.246 Got JSON-RPC error response 00:29:35.246 response: 00:29:35.246 { 00:29:35.246 "code": -19, 00:29:35.246 "message": "No such device" 00:29:35.246 } 00:29:35.246 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:35.246 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.246 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.246 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.246 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:35.506 aio_bdev 00:29:35.506 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
ae0ec9f5-bf4d-4c98-8f4c-f243b7605582 00:29:35.506 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ae0ec9f5-bf4d-4c98-8f4c-f243b7605582 00:29:35.506 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:35.506 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:35.506 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:35.506 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:35.506 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:35.765 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ae0ec9f5-bf4d-4c98-8f4c-f243b7605582 -t 2000 00:29:35.765 [ 00:29:35.765 { 00:29:35.765 "name": "ae0ec9f5-bf4d-4c98-8f4c-f243b7605582", 00:29:35.765 "aliases": [ 00:29:35.766 "lvs/lvol" 00:29:35.766 ], 00:29:35.766 "product_name": "Logical Volume", 00:29:35.766 "block_size": 4096, 00:29:35.766 "num_blocks": 38912, 00:29:35.766 "uuid": "ae0ec9f5-bf4d-4c98-8f4c-f243b7605582", 00:29:35.766 "assigned_rate_limits": { 00:29:35.766 "rw_ios_per_sec": 0, 00:29:35.766 "rw_mbytes_per_sec": 0, 00:29:35.766 "r_mbytes_per_sec": 0, 00:29:35.766 "w_mbytes_per_sec": 0 00:29:35.766 }, 00:29:35.766 "claimed": false, 00:29:35.766 "zoned": false, 00:29:35.766 "supported_io_types": { 00:29:35.766 "read": true, 00:29:35.766 "write": true, 00:29:35.766 "unmap": true, 00:29:35.766 "flush": false, 00:29:35.766 "reset": true, 00:29:35.766 "nvme_admin": false, 00:29:35.766 "nvme_io": false, 00:29:35.766 "nvme_io_md": false, 00:29:35.766 "write_zeroes": true, 00:29:35.766 "zcopy": false, 00:29:35.766 "get_zone_info": false, 00:29:35.766 "zone_management": false, 00:29:35.766 "zone_append": false, 00:29:35.766 "compare": false, 00:29:35.766 "compare_and_write": false, 00:29:35.766 "abort": false, 00:29:35.766 "seek_hole": true, 00:29:35.766 "seek_data": true, 00:29:35.766 "copy": false, 00:29:35.766 "nvme_iov_md": false 00:29:35.766 }, 00:29:35.766 "driver_specific": { 00:29:35.766 "lvol": { 00:29:35.766 "lvol_store_uuid": "68d06e0d-d0dd-4345-aca6-533d54371767", 00:29:35.766 "base_bdev": "aio_bdev", 00:29:35.766 "thin_provision": false, 00:29:35.766 "num_allocated_clusters": 38, 00:29:35.766 "snapshot": false, 00:29:35.766 "clone": false, 00:29:35.766 "esnap_clone": false 00:29:35.766 } 00:29:35.766 } 00:29:35.766 } 00:29:35.766 ] 00:29:35.766 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:35.766 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d06e0d-d0dd-4345-aca6-533d54371767 00:29:35.766 14:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:36.025 14:39:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:36.025 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d06e0d-d0dd-4345-aca6-533d54371767 00:29:36.025 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:36.285 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:36.285 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ae0ec9f5-bf4d-4c98-8f4c-f243b7605582 00:29:36.545 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68d06e0d-d0dd-4345-aca6-533d54371767 00:29:36.545 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:36.804 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:36.804 00:29:36.804 real 0m15.670s 00:29:36.804 user 0m15.264s 00:29:36.804 sys 0m1.426s 00:29:36.804 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:36.804 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:36.804 ************************************ 00:29:36.804 END TEST lvs_grow_clean 00:29:36.804 ************************************ 00:29:36.804 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:36.804 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:36.804 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:36.804 14:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:36.804 ************************************ 00:29:36.804 START TEST lvs_grow_dirty 00:29:36.804 ************************************ 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:37.064 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:37.323 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4ed54763-9500-4a3b-98de-4053c775aa22 00:29:37.323 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:37.323 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:37.583 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:37.583 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:37.583 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4ed54763-9500-4a3b-98de-4053c775aa22 lvol 150 00:29:37.842 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=15e7de04-7372-470f-82ed-96c26a524c67 00:29:37.842 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:37.842 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:37.842 [2024-11-17 14:39:27.029030] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:37.842 [2024-11-17 14:39:27.029163] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:37.842 true 00:29:37.842 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:37.842 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:38.101 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:38.101 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:38.360 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 15e7de04-7372-470f-82ed-96c26a524c67 00:29:38.620 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:38.620 [2024-11-17 14:39:27.785464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.620 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:38.879 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1659954 00:29:38.879 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:38.879 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:38.879 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1659954 /var/tmp/bdevperf.sock 00:29:38.879 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1659954 ']' 00:29:38.879 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:38.879 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.879 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:38.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
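Export-side, the dirty variant issues the same four RPCs as the clean pass, only with the new lvol's UUID as the namespace; condensed, against the target's default /var/tmp/spdk.sock and with paths relative to the repo root:

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 15e7de04-7372-470f-82ed-96c26a524c67
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420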
00:29:38.879 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.879 14:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:38.879 [2024-11-17 14:39:28.038804] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:29:38.879 [2024-11-17 14:39:28.038858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659954 ] 00:29:39.138 [2024-11-17 14:39:28.115631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.138 [2024-11-17 14:39:28.157870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.706 14:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.707 14:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:39.707 14:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:40.276 Nvme0n1 00:29:40.276 14:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:40.276 [ 00:29:40.276 { 00:29:40.276 "name": "Nvme0n1", 00:29:40.276 "aliases": [ 00:29:40.276 "15e7de04-7372-470f-82ed-96c26a524c67" 00:29:40.276 ], 00:29:40.276 "product_name": "NVMe disk", 00:29:40.276 "block_size": 4096, 00:29:40.276 "num_blocks": 38912, 00:29:40.276 "uuid": "15e7de04-7372-470f-82ed-96c26a524c67", 00:29:40.276 "numa_id": 1, 00:29:40.276 "assigned_rate_limits": { 00:29:40.276 "rw_ios_per_sec": 0, 00:29:40.276 "rw_mbytes_per_sec": 0, 00:29:40.276 "r_mbytes_per_sec": 0, 00:29:40.276 "w_mbytes_per_sec": 0 00:29:40.276 }, 00:29:40.276 "claimed": false, 00:29:40.276 "zoned": false, 00:29:40.276 "supported_io_types": { 00:29:40.276 "read": true, 00:29:40.276 "write": true, 00:29:40.276 "unmap": true, 00:29:40.276 "flush": true, 00:29:40.276 "reset": true, 00:29:40.276 "nvme_admin": true, 00:29:40.276 "nvme_io": true, 00:29:40.276 "nvme_io_md": false, 00:29:40.276 "write_zeroes": true, 00:29:40.276 "zcopy": false, 00:29:40.276 "get_zone_info": false, 00:29:40.276 "zone_management": false, 00:29:40.276 "zone_append": false, 00:29:40.276 "compare": true, 00:29:40.276 "compare_and_write": true, 00:29:40.276 "abort": true, 00:29:40.276 "seek_hole": false, 00:29:40.276 "seek_data": false, 00:29:40.276 "copy": true, 00:29:40.276 "nvme_iov_md": false 00:29:40.276 }, 00:29:40.276 "memory_domains": [ 00:29:40.276 { 00:29:40.276 "dma_device_id": "system", 00:29:40.276 "dma_device_type": 1 00:29:40.276 } 00:29:40.276 ], 00:29:40.276 "driver_specific": { 00:29:40.276 "nvme": [ 00:29:40.276 { 00:29:40.276 "trid": { 00:29:40.276 "trtype": "TCP", 00:29:40.276 "adrfam": "IPv4", 00:29:40.276 "traddr": "10.0.0.2", 00:29:40.276 "trsvcid": "4420", 00:29:40.276 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:40.276 }, 00:29:40.276 "ctrlr_data": 
{ 00:29:40.276 "cntlid": 1, 00:29:40.276 "vendor_id": "0x8086", 00:29:40.276 "model_number": "SPDK bdev Controller", 00:29:40.276 "serial_number": "SPDK0", 00:29:40.276 "firmware_revision": "25.01", 00:29:40.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:40.276 "oacs": { 00:29:40.276 "security": 0, 00:29:40.276 "format": 0, 00:29:40.276 "firmware": 0, 00:29:40.276 "ns_manage": 0 00:29:40.276 }, 00:29:40.276 "multi_ctrlr": true, 00:29:40.276 "ana_reporting": false 00:29:40.276 }, 00:29:40.276 "vs": { 00:29:40.276 "nvme_version": "1.3" 00:29:40.276 }, 00:29:40.276 "ns_data": { 00:29:40.276 "id": 1, 00:29:40.276 "can_share": true 00:29:40.276 } 00:29:40.276 } 00:29:40.276 ], 00:29:40.276 "mp_policy": "active_passive" 00:29:40.276 } 00:29:40.276 } 00:29:40.276 ] 00:29:40.276 14:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1660182 00:29:40.276 14:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:40.276 14:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:40.535 Running I/O for 10 seconds... 00:29:41.473 Latency(us) 00:29:41.473 [2024-11-17T13:39:30.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.473 Nvme0n1 : 1.00 21781.00 85.08 0.00 0.00 0.00 0.00 0.00 00:29:41.473 [2024-11-17T13:39:30.698Z] =================================================================================================================== 00:29:41.473 [2024-11-17T13:39:30.698Z] Total : 21781.00 85.08 0.00 0.00 0.00 0.00 0.00 00:29:41.473 00:29:42.412 14:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:42.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.412 Nvme0n1 : 2.00 22257.00 86.94 0.00 0.00 0.00 0.00 0.00 00:29:42.412 [2024-11-17T13:39:31.637Z] =================================================================================================================== 00:29:42.412 [2024-11-17T13:39:31.637Z] Total : 22257.00 86.94 0.00 0.00 0.00 0.00 0.00 00:29:42.412 00:29:42.412 true 00:29:42.679 14:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:42.679 14:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:42.679 14:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:42.679 14:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:42.679 14:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1660182 00:29:43.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.616 Nvme0n1 : 
3.00 22458.00 87.73 0.00 0.00 0.00 0.00 0.00 00:29:43.616 [2024-11-17T13:39:32.841Z] =================================================================================================================== 00:29:43.616 [2024-11-17T13:39:32.841Z] Total : 22458.00 87.73 0.00 0.00 0.00 0.00 0.00 00:29:43.616 00:29:44.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.553 Nvme0n1 : 4.00 22590.25 88.24 0.00 0.00 0.00 0.00 0.00 00:29:44.553 [2024-11-17T13:39:33.778Z] =================================================================================================================== 00:29:44.553 [2024-11-17T13:39:33.778Z] Total : 22590.25 88.24 0.00 0.00 0.00 0.00 0.00 00:29:44.553 00:29:45.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.491 Nvme0n1 : 5.00 22669.60 88.55 0.00 0.00 0.00 0.00 0.00 00:29:45.491 [2024-11-17T13:39:34.716Z] =================================================================================================================== 00:29:45.491 [2024-11-17T13:39:34.716Z] Total : 22669.60 88.55 0.00 0.00 0.00 0.00 0.00 00:29:45.491 00:29:46.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:46.429 Nvme0n1 : 6.00 22722.50 88.76 0.00 0.00 0.00 0.00 0.00 00:29:46.429 [2024-11-17T13:39:35.654Z] =================================================================================================================== 00:29:46.429 [2024-11-17T13:39:35.654Z] Total : 22722.50 88.76 0.00 0.00 0.00 0.00 0.00 00:29:46.429 00:29:47.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:47.366 Nvme0n1 : 7.00 22769.43 88.94 0.00 0.00 0.00 0.00 0.00 00:29:47.366 [2024-11-17T13:39:36.591Z] =================================================================================================================== 00:29:47.366 [2024-11-17T13:39:36.591Z] Total : 22769.43 88.94 0.00 0.00 0.00 0.00 0.00 00:29:47.366 00:29:48.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.744 Nvme0n1 : 8.00 22794.75 89.04 0.00 0.00 0.00 0.00 0.00 00:29:48.744 [2024-11-17T13:39:37.969Z] =================================================================================================================== 00:29:48.744 [2024-11-17T13:39:37.969Z] Total : 22794.75 89.04 0.00 0.00 0.00 0.00 0.00 00:29:48.744 00:29:49.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:49.681 Nvme0n1 : 9.00 22819.89 89.14 0.00 0.00 0.00 0.00 0.00 00:29:49.681 [2024-11-17T13:39:38.906Z] =================================================================================================================== 00:29:49.681 [2024-11-17T13:39:38.906Z] Total : 22819.89 89.14 0.00 0.00 0.00 0.00 0.00 00:29:49.681 00:29:50.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.658 Nvme0n1 : 10.00 22843.00 89.23 0.00 0.00 0.00 0.00 0.00 00:29:50.658 [2024-11-17T13:39:39.883Z] =================================================================================================================== 00:29:50.658 [2024-11-17T13:39:39.883Z] Total : 22843.00 89.23 0.00 0.00 0.00 0.00 0.00 00:29:50.658 00:29:50.658 00:29:50.658 Latency(us) 00:29:50.658 [2024-11-17T13:39:39.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.658 Nvme0n1 : 10.00 22842.99 89.23 0.00 0.00 5600.19 3219.81 27354.16 00:29:50.658 
[2024-11-17T13:39:39.883Z] =================================================================================================================== 00:29:50.658 [2024-11-17T13:39:39.883Z] Total : 22842.99 89.23 0.00 0.00 5600.19 3219.81 27354.16 00:29:50.658 { 00:29:50.658 "results": [ 00:29:50.658 { 00:29:50.658 "job": "Nvme0n1", 00:29:50.658 "core_mask": "0x2", 00:29:50.658 "workload": "randwrite", 00:29:50.658 "status": "finished", 00:29:50.658 "queue_depth": 128, 00:29:50.658 "io_size": 4096, 00:29:50.658 "runtime": 10.002806, 00:29:50.658 "iops": 22842.990256933903, 00:29:50.658 "mibps": 89.23043069114806, 00:29:50.658 "io_failed": 0, 00:29:50.658 "io_timeout": 0, 00:29:50.658 "avg_latency_us": 5600.1949777008695, 00:29:50.658 "min_latency_us": 3219.8121739130434, 00:29:50.658 "max_latency_us": 27354.15652173913 00:29:50.658 } 00:29:50.658 ], 00:29:50.658 "core_count": 1 00:29:50.658 } 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1659954 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1659954 ']' 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1659954 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1659954 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1659954' 00:29:50.658 killing process with pid 1659954 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1659954 00:29:50.658 Received shutdown signal, test time was about 10.000000 seconds 00:29:50.658 00:29:50.658 Latency(us) 00:29:50.658 [2024-11-17T13:39:39.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.658 [2024-11-17T13:39:39.883Z] =================================================================================================================== 00:29:50.658 [2024-11-17T13:39:39.883Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1659954 00:29:50.658 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:50.979 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1657089 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1657089 00:29:51.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1657089 Killed "${NVMF_APP[@]}" "$@" 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1662025 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1662025 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1662025 ']' 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.238 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:51.498 [2024-11-17 14:39:40.483945] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:51.498 [2024-11-17 14:39:40.484879] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:29:51.498 [2024-11-17 14:39:40.484913] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.498 [2024-11-17 14:39:40.563499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.498 [2024-11-17 14:39:40.605087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.498 [2024-11-17 14:39:40.605126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.498 [2024-11-17 14:39:40.605133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.498 [2024-11-17 14:39:40.605139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.498 [2024-11-17 14:39:40.605145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.498 [2024-11-17 14:39:40.605698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.498 [2024-11-17 14:39:40.674257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:51.498 [2024-11-17 14:39:40.674486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:51.498 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.498 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:51.498 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.498 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.498 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:51.759 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.759 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:51.759 [2024-11-17 14:39:40.919078] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:51.759 [2024-11-17 14:39:40.919278] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:51.759 [2024-11-17 14:39:40.919373] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:51.759 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:51.759 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 15e7de04-7372-470f-82ed-96c26a524c67 00:29:51.759 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=15e7de04-7372-470f-82ed-96c26a524c67 00:29:51.759 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:51.759 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:51.759 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:51.759 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:51.759 14:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:52.018 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 15e7de04-7372-470f-82ed-96c26a524c67 -t 2000 00:29:52.278 [ 00:29:52.278 { 00:29:52.278 "name": "15e7de04-7372-470f-82ed-96c26a524c67", 00:29:52.278 "aliases": [ 00:29:52.278 "lvs/lvol" 00:29:52.278 ], 00:29:52.278 "product_name": "Logical Volume", 00:29:52.278 "block_size": 4096, 00:29:52.278 "num_blocks": 38912, 00:29:52.278 "uuid": "15e7de04-7372-470f-82ed-96c26a524c67", 00:29:52.278 "assigned_rate_limits": { 00:29:52.278 "rw_ios_per_sec": 0, 00:29:52.278 "rw_mbytes_per_sec": 0, 00:29:52.278 
"r_mbytes_per_sec": 0, 00:29:52.278 "w_mbytes_per_sec": 0 00:29:52.278 }, 00:29:52.278 "claimed": false, 00:29:52.278 "zoned": false, 00:29:52.278 "supported_io_types": { 00:29:52.278 "read": true, 00:29:52.278 "write": true, 00:29:52.278 "unmap": true, 00:29:52.278 "flush": false, 00:29:52.278 "reset": true, 00:29:52.278 "nvme_admin": false, 00:29:52.278 "nvme_io": false, 00:29:52.278 "nvme_io_md": false, 00:29:52.278 "write_zeroes": true, 00:29:52.278 "zcopy": false, 00:29:52.278 "get_zone_info": false, 00:29:52.278 "zone_management": false, 00:29:52.278 "zone_append": false, 00:29:52.278 "compare": false, 00:29:52.278 "compare_and_write": false, 00:29:52.278 "abort": false, 00:29:52.278 "seek_hole": true, 00:29:52.278 "seek_data": true, 00:29:52.278 "copy": false, 00:29:52.278 "nvme_iov_md": false 00:29:52.278 }, 00:29:52.278 "driver_specific": { 00:29:52.278 "lvol": { 00:29:52.278 "lvol_store_uuid": "4ed54763-9500-4a3b-98de-4053c775aa22", 00:29:52.278 "base_bdev": "aio_bdev", 00:29:52.278 "thin_provision": false, 00:29:52.278 "num_allocated_clusters": 38, 00:29:52.278 "snapshot": false, 00:29:52.278 "clone": false, 00:29:52.278 "esnap_clone": false 00:29:52.278 } 00:29:52.278 } 00:29:52.278 } 00:29:52.278 ] 00:29:52.278 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:52.278 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:52.278 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:52.538 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:52.538 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:52.538 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:52.538 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:52.538 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:52.797 [2024-11-17 14:39:41.910152] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:52.797 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:52.797 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:52.797 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:52.797 14:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:29:52.797 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:52.797 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:29:52.797 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:52.797 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:29:52.797 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:52.797 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:29:52.797 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:29:52.797 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ed54763-9500-4a3b-98de-4053c775aa22
00:29:53.057 request:
00:29:53.057 {
00:29:53.057 "uuid": "4ed54763-9500-4a3b-98de-4053c775aa22",
00:29:53.057 "method": "bdev_lvol_get_lvstores",
00:29:53.057 "req_id": 1
00:29:53.057 }
00:29:53.057 Got JSON-RPC error response
00:29:53.057 response:
00:29:53.057 {
00:29:53.057 "code": -19,
00:29:53.057 "message": "No such device"
00:29:53.057 }
00:29:53.057 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:29:53.057 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:53.057 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:53.057 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:53.057 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:29:53.317 aio_bdev
00:29:53.317 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 15e7de04-7372-470f-82ed-96c26a524c67
00:29:53.317 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=15e7de04-7372-470f-82ed-96c26a524c67
00:29:53.317 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:29:53.317 14:39:42
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:53.317 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:53.317 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:53.317 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:53.576 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 15e7de04-7372-470f-82ed-96c26a524c67 -t 2000 00:29:53.576 [ 00:29:53.576 { 00:29:53.576 "name": "15e7de04-7372-470f-82ed-96c26a524c67", 00:29:53.576 "aliases": [ 00:29:53.576 "lvs/lvol" 00:29:53.576 ], 00:29:53.576 "product_name": "Logical Volume", 00:29:53.576 "block_size": 4096, 00:29:53.576 "num_blocks": 38912, 00:29:53.576 "uuid": "15e7de04-7372-470f-82ed-96c26a524c67", 00:29:53.576 "assigned_rate_limits": { 00:29:53.576 "rw_ios_per_sec": 0, 00:29:53.576 "rw_mbytes_per_sec": 0, 00:29:53.576 "r_mbytes_per_sec": 0, 00:29:53.576 "w_mbytes_per_sec": 0 00:29:53.576 }, 00:29:53.576 "claimed": false, 00:29:53.576 "zoned": false, 00:29:53.576 "supported_io_types": { 00:29:53.576 "read": true, 00:29:53.576 "write": true, 00:29:53.576 "unmap": true, 00:29:53.576 "flush": false, 00:29:53.576 "reset": true, 00:29:53.576 "nvme_admin": false, 00:29:53.577 "nvme_io": false, 00:29:53.577 "nvme_io_md": false, 00:29:53.577 "write_zeroes": true, 00:29:53.577 "zcopy": false, 00:29:53.577 "get_zone_info": false, 00:29:53.577 "zone_management": false, 00:29:53.577 "zone_append": false, 00:29:53.577 "compare": false, 00:29:53.577 "compare_and_write": false, 00:29:53.577 "abort": false, 00:29:53.577 "seek_hole": true, 00:29:53.577 "seek_data": true, 00:29:53.577 "copy": false, 00:29:53.577 "nvme_iov_md": false 00:29:53.577 }, 00:29:53.577 "driver_specific": { 00:29:53.577 "lvol": { 00:29:53.577 "lvol_store_uuid": "4ed54763-9500-4a3b-98de-4053c775aa22", 00:29:53.577 "base_bdev": "aio_bdev", 00:29:53.577 "thin_provision": false, 00:29:53.577 "num_allocated_clusters": 38, 00:29:53.577 "snapshot": false, 00:29:53.577 "clone": false, 00:29:53.577 "esnap_clone": false 00:29:53.577 } 00:29:53.577 } 00:29:53.577 } 00:29:53.577 ] 00:29:53.577 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:53.577 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:53.577 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:53.837 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:53.837 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:53.837 14:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:54.096 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:54.097 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 15e7de04-7372-470f-82ed-96c26a524c67 00:29:54.356 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ed54763-9500-4a3b-98de-4053c775aa22 00:29:54.356 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:54.615 00:29:54.615 real 0m17.760s 00:29:54.615 user 0m35.429s 00:29:54.615 sys 0m3.756s 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:54.615 ************************************ 00:29:54.615 END TEST lvs_grow_dirty 00:29:54.615 ************************************ 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:54.615 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:54.615 nvmf_trace.0 00:29:54.874 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:54.874 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:54.874 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:54.874 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:54.875 rmmod nvme_tcp 00:29:54.875 rmmod nvme_fabrics 00:29:54.875 rmmod nvme_keyring 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1662025 ']' 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1662025 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1662025 ']' 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1662025 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662025 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662025' 00:29:54.875 killing process with pid 1662025 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1662025 00:29:54.875 14:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1662025 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.134 14:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.042 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:57.042 00:29:57.042 real 0m42.575s 00:29:57.042 user 0m53.212s 00:29:57.042 sys 0m10.042s 00:29:57.042 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.042 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:57.042 ************************************ 00:29:57.042 END TEST nvmf_lvs_grow 00:29:57.042 ************************************ 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:57.302 ************************************ 00:29:57.302 START TEST nvmf_bdev_io_wait 00:29:57.302 ************************************ 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:57.302 * Looking for test storage... 
00:29:57.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.302 --rc genhtml_branch_coverage=1 00:29:57.302 --rc genhtml_function_coverage=1 00:29:57.302 --rc genhtml_legend=1 00:29:57.302 --rc geninfo_all_blocks=1 00:29:57.302 --rc geninfo_unexecuted_blocks=1 00:29:57.302 00:29:57.302 ' 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.302 --rc genhtml_branch_coverage=1 00:29:57.302 --rc genhtml_function_coverage=1 00:29:57.302 --rc genhtml_legend=1 00:29:57.302 --rc geninfo_all_blocks=1 00:29:57.302 --rc geninfo_unexecuted_blocks=1 00:29:57.302 00:29:57.302 ' 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.302 --rc genhtml_branch_coverage=1 00:29:57.302 --rc genhtml_function_coverage=1 00:29:57.302 --rc genhtml_legend=1 00:29:57.302 --rc geninfo_all_blocks=1 00:29:57.302 --rc geninfo_unexecuted_blocks=1 00:29:57.302 00:29:57.302 ' 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.302 --rc genhtml_branch_coverage=1 00:29:57.302 --rc genhtml_function_coverage=1 00:29:57.302 --rc genhtml_legend=1 00:29:57.302 --rc geninfo_all_blocks=1 00:29:57.302 --rc 
geninfo_unexecuted_blocks=1 00:29:57.302 00:29:57.302 ' 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.302 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.303 14:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:03.876 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:03.876 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.876 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:03.877 Found net devices under 0000:86:00.0: cvl_0_0 00:30:03.877 
14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:03.877 Found net devices under 0000:86:00.1: cvl_0_1 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:03.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:30:03.877 00:30:03.877 --- 10.0.0.2 ping statistics --- 00:30:03.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.877 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:03.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms
00:30:03.877
00:30:03.877 --- 10.0.0.1 ping statistics ---
00:30:03.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:03.877 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1666068
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1666068
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1666068 ']'
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:03.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
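The nvmf_tcp_init block above (common.sh@250-291) moves one port of the e810 pair, cvl_0_0, into a private network namespace while its sibling cvl_0_1 stays in the host, giving a point-to-point NVMe/TCP link that the two pings then verify in both directions. A minimal standalone sketch of the same topology, with the interface names, addresses, and firewall rule taken verbatim from the trace (assumes root and that both net devices already exist):

  ip netns add cvl_0_0_ns_spdk                                  # target side lives in a namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, namespace side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                            # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # namespace -> host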
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:30:03.877 [2024-11-17 14:39:52.475577] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:30:03.877 [2024-11-17 14:39:52.476497] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:30:03.877 [2024-11-17 14:39:52.476529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:03.877 [2024-11-17 14:39:52.556199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:03.877 [2024-11-17 14:39:52.599626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:03.877 [2024-11-17 14:39:52.599663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:03.877 [2024-11-17 14:39:52.599670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:03.877 [2024-11-17 14:39:52.599676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:03.877 [2024-11-17 14:39:52.599681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:03.877 [2024-11-17 14:39:52.601245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:03.877 [2024-11-17 14:39:52.601416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:03.877 [2024-11-17 14:39:52.601463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:03.877 [2024-11-17 14:39:52.601463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:03.877 [2024-11-17 14:39:52.601865] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
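The nvmf/common.sh@508 line above is the actual target launch: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace with --interrupt-mode (hence the reactors above coming up in intr mode) and --wait-for-rpc, which defers framework initialization until the test has issued its configuration RPCs. A sketch of that launch pattern; SPDK_ROOT is a placeholder, and waitforlisten is the harness helper traced above:

  NS=(ip netns exec cvl_0_0_ns_spdk)
  "${NS[@]}" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # polls until /var/tmp/spdk.sock accepts RPC connections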
00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:03.877 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.878 [2024-11-17 14:39:52.738526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:03.878 [2024-11-17 14:39:52.739002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:03.878 [2024-11-17 14:39:52.739174] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:03.878 [2024-11-17 14:39:52.739313] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
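With the app parked in --wait-for-rpc, bdev_io_wait.sh@18-19 above shrink the bdev layer's I/O descriptor pool before starting the framework: -p 5 / -c 1 appear designed to leave so few spdk_bdev_io objects that the queue-depth-128 bdevperf jobs later in this test hit allocation failures and exercise the spdk_bdev_queue_io_wait retry path the test is named for. The same sequence as standalone rpc.py calls, including the transport/subsystem RPCs traced just below (a sketch; rpc_cmd in the harness wraps scripts/rpc.py against /var/tmp/spdk.sock):

  ./scripts/rpc.py bdev_set_options -p 5 -c 1        # tiny bdev_io pool and cache
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420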
00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.878 [2024-11-17 14:39:52.750278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.878 Malloc0 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.878 [2024-11-17 14:39:52.818340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1666168 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1666170 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.878 { 00:30:03.878 "params": { 00:30:03.878 "name": "Nvme$subsystem", 00:30:03.878 "trtype": "$TEST_TRANSPORT", 00:30:03.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.878 "adrfam": "ipv4", 00:30:03.878 "trsvcid": "$NVMF_PORT", 00:30:03.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.878 "hdgst": ${hdgst:-false}, 00:30:03.878 "ddgst": ${ddgst:-false} 00:30:03.878 }, 00:30:03.878 "method": "bdev_nvme_attach_controller" 00:30:03.878 } 00:30:03.878 EOF 00:30:03.878 )") 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1666173 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.878 { 00:30:03.878 "params": { 00:30:03.878 "name": "Nvme$subsystem", 00:30:03.878 "trtype": "$TEST_TRANSPORT", 00:30:03.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.878 "adrfam": "ipv4", 00:30:03.878 "trsvcid": "$NVMF_PORT", 00:30:03.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.878 "hdgst": ${hdgst:-false}, 00:30:03.878 "ddgst": ${ddgst:-false} 00:30:03.878 }, 00:30:03.878 "method": "bdev_nvme_attach_controller" 00:30:03.878 } 00:30:03.878 EOF 00:30:03.878 )") 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=1666177 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.878 { 00:30:03.878 "params": { 00:30:03.878 "name": "Nvme$subsystem", 00:30:03.878 "trtype": "$TEST_TRANSPORT", 00:30:03.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.878 "adrfam": "ipv4", 00:30:03.878 "trsvcid": "$NVMF_PORT", 00:30:03.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.878 "hdgst": ${hdgst:-false}, 00:30:03.878 "ddgst": ${ddgst:-false} 00:30:03.878 }, 00:30:03.878 "method": "bdev_nvme_attach_controller" 00:30:03.878 } 00:30:03.878 EOF 00:30:03.878 )") 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:03.878 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.878 { 00:30:03.878 "params": { 00:30:03.878 "name": "Nvme$subsystem", 00:30:03.878 "trtype": "$TEST_TRANSPORT", 00:30:03.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.878 "adrfam": "ipv4", 00:30:03.878 "trsvcid": "$NVMF_PORT", 00:30:03.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.878 "hdgst": ${hdgst:-false}, 00:30:03.878 "ddgst": ${ddgst:-false} 00:30:03.878 }, 00:30:03.878 "method": "bdev_nvme_attach_controller" 00:30:03.878 } 00:30:03.878 EOF 00:30:03.878 )") 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1666168 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
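The heredoc blocks above are gen_nvmf_target_json building one bdev_nvme_attach_controller stanza per caller, with hdgst/ddgst defaulting to false via ${hdgst:-false}-style parameter expansion; jq then validates and compacts the result, and each bdevperf reads it on file descriptor 63 through process substitution, which is the --json /dev/fd/63 visible in the command lines above. In the harness the stanza is wrapped into a bdev-subsystem JSON config (the wrapper is not shown in this excerpt). A sketch of the invocation shape, hedged to the parts visible in this trace:

  # gen_nvmf_target_json emits the attach-controller JSON printed below;
  # <(...) is what /dev/fd/63 resolves from.
  "$SPDK_ROOT/build/examples/bdevperf" -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json)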
00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:03.879 "params": { 00:30:03.879 "name": "Nvme1", 00:30:03.879 "trtype": "tcp", 00:30:03.879 "traddr": "10.0.0.2", 00:30:03.879 "adrfam": "ipv4", 00:30:03.879 "trsvcid": "4420", 00:30:03.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.879 "hdgst": false, 00:30:03.879 "ddgst": false 00:30:03.879 }, 00:30:03.879 "method": "bdev_nvme_attach_controller" 00:30:03.879 }' 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:03.879 "params": { 00:30:03.879 "name": "Nvme1", 00:30:03.879 "trtype": "tcp", 00:30:03.879 "traddr": "10.0.0.2", 00:30:03.879 "adrfam": "ipv4", 00:30:03.879 "trsvcid": "4420", 00:30:03.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.879 "hdgst": false, 00:30:03.879 "ddgst": false 00:30:03.879 }, 00:30:03.879 "method": "bdev_nvme_attach_controller" 00:30:03.879 }' 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:03.879 "params": { 00:30:03.879 "name": "Nvme1", 00:30:03.879 "trtype": "tcp", 00:30:03.879 "traddr": "10.0.0.2", 00:30:03.879 "adrfam": "ipv4", 00:30:03.879 "trsvcid": "4420", 00:30:03.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.879 "hdgst": false, 00:30:03.879 "ddgst": false 00:30:03.879 }, 00:30:03.879 "method": "bdev_nvme_attach_controller" 00:30:03.879 }' 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:03.879 14:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:03.879 "params": { 00:30:03.879 "name": "Nvme1", 00:30:03.879 "trtype": "tcp", 00:30:03.879 "traddr": "10.0.0.2", 00:30:03.879 "adrfam": "ipv4", 00:30:03.879 "trsvcid": "4420", 00:30:03.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.879 "hdgst": false, 00:30:03.879 "ddgst": false 00:30:03.879 }, 00:30:03.879 "method": "bdev_nvme_attach_controller" 00:30:03.879 }' 00:30:03.879 [2024-11-17 14:39:52.868697] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:30:03.879 [2024-11-17 14:39:52.868749] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:03.879 [2024-11-17 14:39:52.869669] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:30:03.879 [2024-11-17 14:39:52.869714] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:03.879 [2024-11-17 14:39:52.871642] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:30:03.879 [2024-11-17 14:39:52.871688] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:03.879 [2024-11-17 14:39:52.874703] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:30:03.879 [2024-11-17 14:39:52.874744] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:03.879 [2024-11-17 14:39:53.059845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.138 [2024-11-17 14:39:53.103358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:04.138 [2024-11-17 14:39:53.153668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.138 [2024-11-17 14:39:53.196792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:04.138 [2024-11-17 14:39:53.254282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.138 [2024-11-17 14:39:53.305541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:04.138 [2024-11-17 14:39:53.306779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.138 [2024-11-17 14:39:53.349435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:04.397 Running I/O for 1 seconds... 00:30:04.397 Running I/O for 1 seconds... 00:30:04.397 Running I/O for 1 seconds... 00:30:04.656 Running I/O for 1 seconds... 
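All four data-path workloads run concurrently against the same Malloc0-backed namespace, one bdevperf instance per core mask, which is why four "Running I/O for 1 seconds..." banners appear at once above. The launch-and-wait pattern, condensed from bdev_io_wait.sh@27-40 as traced above and below (bdevperf path shortened; per the EAL traces, -s 256 reserves 256 MB of memory per instance):

  bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # per-workload latency reports follow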
00:30:05.483 8535.00 IOPS, 33.34 MiB/s
00:30:05.483 Latency(us)
[2024-11-17T13:39:54.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:05.483 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:30:05.483 Nvme1n1 : 1.02 8570.97 33.48 0.00 0.00 14871.34 3405.02 27468.13
[2024-11-17T13:39:54.708Z] ===================================================================================================================
[2024-11-17T13:39:54.708Z] Total : 8570.97 33.48 0.00 0.00 14871.34 3405.02 27468.13
00:30:05.483 245992.00 IOPS, 960.91 MiB/s
00:30:05.483 Latency(us)
[2024-11-17T13:39:54.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:05.483 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:30:05.483 Nvme1n1 : 1.00 245610.46 959.42 0.00 0.00 518.47 229.73 1531.55
[2024-11-17T13:39:54.708Z] ===================================================================================================================
[2024-11-17T13:39:54.708Z] Total : 245610.46 959.42 0.00 0.00 518.47 229.73 1531.55
00:30:05.483 7943.00 IOPS, 31.03 MiB/s
00:30:05.483 Latency(us)
[2024-11-17T13:39:54.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:05.483 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:30:05.483 Nvme1n1 : 1.01 8046.10 31.43 0.00 0.00 15864.16 4673.00 26214.40
[2024-11-17T13:39:54.708Z] ===================================================================================================================
[2024-11-17T13:39:54.708Z] Total : 8046.10 31.43 0.00 0.00 15864.16 4673.00 26214.40
00:30:05.483 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1666170
00:30:05.483 12450.00 IOPS, 48.63 MiB/s
00:30:05.483 Latency(us)
[2024-11-17T13:39:54.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:05.483 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:30:05.483 Nvme1n1 : 1.01 12516.79 48.89 0.00 0.00 10198.90 3903.67 14816.83
[2024-11-17T13:39:54.708Z] ===================================================================================================================
[2024-11-17T13:39:54.708Z] Total : 12516.79 48.89 0.00 0.00 10198.90 3903.67 14816.83
00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1666173
00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1666177
00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:05.743 rmmod nvme_tcp 00:30:05.743 rmmod nvme_fabrics 00:30:05.743 rmmod nvme_keyring 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1666068 ']' 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1666068 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1666068 ']' 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1666068 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666068 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666068' 00:30:05.743 killing process with pid 1666068 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1666068 00:30:05.743 14:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1666068 00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.002 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.538 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.538 00:30:08.538 real 0m10.839s 00:30:08.538 user 0m15.508s 00:30:08.538 sys 0m6.427s 00:30:08.538 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.538 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:08.538 ************************************ 00:30:08.538 END TEST nvmf_bdev_io_wait 00:30:08.538 ************************************ 00:30:08.538 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:08.538 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:08.538 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.538 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:08.538 ************************************ 00:30:08.538 START TEST nvmf_queue_depth 00:30:08.538 ************************************ 00:30:08.538 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:08.539 * Looking for test storage... 
00:30:08.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:08.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.539 --rc genhtml_branch_coverage=1 00:30:08.539 --rc genhtml_function_coverage=1 00:30:08.539 --rc genhtml_legend=1 00:30:08.539 --rc geninfo_all_blocks=1 00:30:08.539 --rc geninfo_unexecuted_blocks=1 00:30:08.539 00:30:08.539 ' 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:08.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.539 --rc genhtml_branch_coverage=1 00:30:08.539 --rc genhtml_function_coverage=1 00:30:08.539 --rc genhtml_legend=1 00:30:08.539 --rc geninfo_all_blocks=1 00:30:08.539 --rc geninfo_unexecuted_blocks=1 00:30:08.539 00:30:08.539 ' 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:08.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.539 --rc genhtml_branch_coverage=1 00:30:08.539 --rc genhtml_function_coverage=1 00:30:08.539 --rc genhtml_legend=1 00:30:08.539 --rc geninfo_all_blocks=1 00:30:08.539 --rc geninfo_unexecuted_blocks=1 00:30:08.539 00:30:08.539 ' 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:08.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.539 --rc genhtml_branch_coverage=1 00:30:08.539 --rc genhtml_function_coverage=1 00:30:08.539 --rc genhtml_legend=1 00:30:08.539 --rc geninfo_all_blocks=1 00:30:08.539 --rc 
geninfo_unexecuted_blocks=1 00:30:08.539 00:30:08.539 ' 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.539 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.540 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.819 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.080 14:40:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:14.080 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:14.080 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:30:14.080 Found net devices under 0000:86:00.0: cvl_0_0 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:14.080 Found net devices under 0000:86:00.1: cvl_0_1 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:30:14.080 00:30:14.080 --- 10.0.0.2 ping statistics --- 00:30:14.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.080 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:30:14.080 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:14.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:30:14.340 00:30:14.340 --- 10.0.0.1 ping statistics --- 00:30:14.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.340 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1670091 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1670091 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1670091 ']' 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
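To summarize the nvmf_tcp_init sequence traced above: one E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2/24, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, with an iptables ACCEPT rule opening TCP/4420 and a ping in each direction to verify reachability. A condensed sketch of that setup, using only the names and addresses from this log:

    # Split the two physical ports across namespaces so one host can act as
    # both NVMe/TCP target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # target port reachable from the root namespace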
00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.340 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.340 [2024-11-17 14:40:03.403640] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:14.340 [2024-11-17 14:40:03.404633] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:30:14.340 [2024-11-17 14:40:03.404671] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.340 [2024-11-17 14:40:03.485546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.340 [2024-11-17 14:40:03.526754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.340 [2024-11-17 14:40:03.526788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.340 [2024-11-17 14:40:03.526795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.340 [2024-11-17 14:40:03.526801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.340 [2024-11-17 14:40:03.526806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.340 [2024-11-17 14:40:03.527326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.600 [2024-11-17 14:40:03.593074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:14.600 [2024-11-17 14:40:03.593287] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
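nvmfappstart launches nvmf_tgt inside that namespace with core mask 0x2 (bit 1 only, hence the single reactor on core 1 and "Total cores available: 1" above) plus --interrupt-mode, and waitforlisten then polls until the app answers on /var/tmp/spdk.sock. A hedged approximation of that start-and-wait idiom; the loop body is illustrative, and rpc_get_methods is a standard SPDK RPC used here only as a liveness probe:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # succeeds once the target is listening on the RPC socket
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done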
00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.600 [2024-11-17 14:40:03.659977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.600 Malloc0 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
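The rpc_cmd calls traced here provision the target over /var/tmp/spdk.sock: a TCP transport with an 8192-byte in-capsule data size, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem exposing that bdev as a namespace on 10.0.0.2:4420. rpc_cmd forwards its arguments to scripts/rpc.py, so the equivalent direct invocations are (flags copied verbatim from the trace):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420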
00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.600 [2024-11-17 14:40:03.732118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1670119 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1670119 /var/tmp/bdevperf.sock 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1670119 ']' 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:14.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.600 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.600 [2024-11-17 14:40:03.784444] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
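On the initiator side, bdevperf is started with -z (wait for an RPC before running the workload) on its own socket, handed an NVMe-oF controller, and then kicked off with the bdevperf.py perform_tests helper, as traced just below. Condensed from this log:

    # Queue-depth workload: 1024 outstanding 4 KiB verify IOs for 10 seconds.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests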
00:30:14.600 [2024-11-17 14:40:03.784485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1670119 ] 00:30:14.860 [2024-11-17 14:40:03.859536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.860 [2024-11-17 14:40:03.902071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.860 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.860 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:14.860 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:14.860 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.860 14:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.860 NVMe0n1 00:30:14.860 14:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.860 14:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:15.119 Running I/O for 10 seconds... 00:30:16.993 11406.00 IOPS, 44.55 MiB/s [2024-11-17T13:40:07.598Z] 11776.00 IOPS, 46.00 MiB/s [2024-11-17T13:40:08.537Z] 11946.67 IOPS, 46.67 MiB/s [2024-11-17T13:40:09.471Z] 12030.00 IOPS, 46.99 MiB/s [2024-11-17T13:40:10.407Z] 12067.80 IOPS, 47.14 MiB/s [2024-11-17T13:40:11.343Z] 12085.17 IOPS, 47.21 MiB/s [2024-11-17T13:40:12.279Z] 12099.29 IOPS, 47.26 MiB/s [2024-11-17T13:40:13.215Z] 12106.25 IOPS, 47.29 MiB/s [2024-11-17T13:40:14.591Z] 12144.33 IOPS, 47.44 MiB/s [2024-11-17T13:40:14.591Z] 12168.00 IOPS, 47.53 MiB/s 00:30:25.366 Latency(us) 00:30:25.366 [2024-11-17T13:40:14.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.366 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:25.366 Verification LBA range: start 0x0 length 0x4000 00:30:25.366 NVMe0n1 : 10.06 12181.10 47.58 0.00 0.00 83758.49 19717.79 55164.22 00:30:25.366 [2024-11-17T13:40:14.591Z] =================================================================================================================== 00:30:25.366 [2024-11-17T13:40:14.591Z] Total : 12181.10 47.58 0.00 0.00 83758.49 19717.79 55164.22 00:30:25.366 { 00:30:25.366 "results": [ 00:30:25.366 { 00:30:25.366 "job": "NVMe0n1", 00:30:25.366 "core_mask": "0x1", 00:30:25.366 "workload": "verify", 00:30:25.366 "status": "finished", 00:30:25.366 "verify_range": { 00:30:25.366 "start": 0, 00:30:25.366 "length": 16384 00:30:25.366 }, 00:30:25.366 "queue_depth": 1024, 00:30:25.366 "io_size": 4096, 00:30:25.366 "runtime": 10.064037, 00:30:25.366 "iops": 12181.095916082184, 00:30:25.366 "mibps": 47.58240592219603, 00:30:25.366 "io_failed": 0, 00:30:25.366 "io_timeout": 0, 00:30:25.366 "avg_latency_us": 83758.48762717171, 00:30:25.366 "min_latency_us": 19717.787826086955, 00:30:25.366 "max_latency_us": 55164.215652173916 00:30:25.366 } 
00:30:25.366 ], 00:30:25.366 "core_count": 1 00:30:25.366 } 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1670119 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1670119 ']' 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1670119 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1670119 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1670119' 00:30:25.366 killing process with pid 1670119 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1670119 00:30:25.366 Received shutdown signal, test time was about 10.000000 seconds 00:30:25.366 00:30:25.366 Latency(us) 00:30:25.366 [2024-11-17T13:40:14.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.366 [2024-11-17T13:40:14.591Z] =================================================================================================================== 00:30:25.366 [2024-11-17T13:40:14.591Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1670119 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:25.366 rmmod nvme_tcp 00:30:25.366 rmmod nvme_fabrics 00:30:25.366 rmmod nvme_keyring 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
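As a sanity check on the summary above, the reported numbers are self-consistent: 12181.10 IOPS of 4096-byte IOs is the quoted 47.58 MiB/s, and by Little's law a queue depth of 1024 at the ~83.76 ms average latency predicts roughly 12225 IOPS, within half a percent of the measured value:

    echo 'scale=2; 12181.10 * 4096 / 1048576' | bc   # -> 47.58 MiB/s
    echo 'scale=0; 1024 / 0.08375848' | bc           # -> 12225 IOPS (Little's law)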
00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1670091 ']' 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1670091 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1670091 ']' 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1670091 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:25.366 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1670091 00:30:25.625 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:25.625 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:25.625 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1670091' 00:30:25.625 killing process with pid 1670091 00:30:25.625 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1670091 00:30:25.625 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1670091 00:30:25.625 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:25.625 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:25.625 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:25.626 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:25.626 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:25.626 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:25.626 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:25.626 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.626 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.626 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.626 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.626 14:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.161 14:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.161 00:30:28.161 real 0m19.624s 00:30:28.161 user 0m22.560s 00:30:28.161 sys 0m6.306s 00:30:28.161 14:40:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.161 14:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:28.161 ************************************ 00:30:28.161 END TEST nvmf_queue_depth 00:30:28.161 ************************************ 00:30:28.161 14:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:28.161 14:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:28.161 14:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.161 14:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:28.161 ************************************ 00:30:28.161 START TEST nvmf_target_multipath 00:30:28.161 ************************************ 00:30:28.161 14:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:28.161 * Looking for test storage... 00:30:28.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:28.161 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:28.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.162 --rc genhtml_branch_coverage=1 00:30:28.162 --rc genhtml_function_coverage=1 00:30:28.162 --rc genhtml_legend=1 00:30:28.162 --rc geninfo_all_blocks=1 00:30:28.162 --rc geninfo_unexecuted_blocks=1 00:30:28.162 00:30:28.162 ' 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:28.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.162 --rc genhtml_branch_coverage=1 00:30:28.162 --rc genhtml_function_coverage=1 00:30:28.162 --rc genhtml_legend=1 00:30:28.162 --rc geninfo_all_blocks=1 00:30:28.162 --rc geninfo_unexecuted_blocks=1 00:30:28.162 00:30:28.162 ' 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:28.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.162 --rc genhtml_branch_coverage=1 00:30:28.162 --rc genhtml_function_coverage=1 00:30:28.162 --rc genhtml_legend=1 
00:30:28.162 --rc geninfo_all_blocks=1 00:30:28.162 --rc geninfo_unexecuted_blocks=1 00:30:28.162 00:30:28.162 ' 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:28.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.162 --rc genhtml_branch_coverage=1 00:30:28.162 --rc genhtml_function_coverage=1 00:30:28.162 --rc genhtml_legend=1 00:30:28.162 --rc geninfo_all_blocks=1 00:30:28.162 --rc geninfo_unexecuted_blocks=1 00:30:28.162 00:30:28.162 ' 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:28.162 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.163 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.163 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.163 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:28.163 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:28.163 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:28.163 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.733 14:40:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:34.733 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:34.733 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.733 14:40:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.733 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:34.734 Found net devices under 0000:86:00.0: cvl_0_0 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:34.734 Found net devices under 0000:86:00.1: cvl_0_1 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:34.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:30:34.734 00:30:34.734 --- 10.0.0.2 ping statistics --- 00:30:34.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.734 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:34.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:30:34.734 00:30:34.734 --- 10.0.0.1 ping statistics --- 00:30:34.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.734 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:34.734 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:34.734 only one NIC for nvmf test 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:34.734 rmmod nvme_tcp 00:30:34.734 rmmod nvme_fabrics 00:30:34.734 rmmod nvme_keyring 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:34.734 14:40:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.734 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.113 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.113 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:36.114 14:40:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.114 00:30:36.114 real 0m8.280s 00:30:36.114 user 0m1.745s 00:30:36.114 sys 0m4.566s 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:36.114 ************************************ 00:30:36.114 END TEST nvmf_target_multipath 00:30:36.114 ************************************ 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:36.114 ************************************ 00:30:36.114 START TEST nvmf_zcopy 00:30:36.114 ************************************ 00:30:36.114 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:36.375 * Looking for test storage... 
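The firewall handling in the teardown above is worth noting: the harness never deletes rules by position or by replaying inverse commands. Its ipts wrapper (nvmf/common.sh@790 earlier) tags every rule it inserts with an '-m comment --comment SPDK_NVMF:...' marker, and the iptr cleanup seen in both nvmftestfini passes (@791) rewrites the ruleset with the tagged lines filtered out. The pattern reduces to two small functions; this is a sketch of the idea, not the full helpers:

    # insert a rule, recording the full argument list in a comment tag
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

    # cleanup: reload the ruleset minus every rule we ever tagged
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    iptr                                                      # later: drop all SPDK rules at once

Because the sweep is keyed on the tag rather than on rule numbers, teardown stays idempotent even if an earlier run died before cleaning up after itself.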
00:30:36.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:36.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.375 --rc genhtml_branch_coverage=1 00:30:36.375 --rc genhtml_function_coverage=1 00:30:36.375 --rc genhtml_legend=1 00:30:36.375 --rc geninfo_all_blocks=1 00:30:36.375 --rc geninfo_unexecuted_blocks=1 00:30:36.375 00:30:36.375 ' 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:36.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.375 --rc genhtml_branch_coverage=1 00:30:36.375 --rc genhtml_function_coverage=1 00:30:36.375 --rc genhtml_legend=1 00:30:36.375 --rc geninfo_all_blocks=1 00:30:36.375 --rc geninfo_unexecuted_blocks=1 00:30:36.375 00:30:36.375 ' 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:36.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.375 --rc genhtml_branch_coverage=1 00:30:36.375 --rc genhtml_function_coverage=1 00:30:36.375 --rc genhtml_legend=1 00:30:36.375 --rc geninfo_all_blocks=1 00:30:36.375 --rc geninfo_unexecuted_blocks=1 00:30:36.375 00:30:36.375 ' 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:36.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.375 --rc genhtml_branch_coverage=1 00:30:36.375 --rc genhtml_function_coverage=1 00:30:36.375 --rc genhtml_legend=1 00:30:36.375 --rc geninfo_all_blocks=1 00:30:36.375 --rc geninfo_unexecuted_blocks=1 00:30:36.375 00:30:36.375 ' 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:36.375 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.376 14:40:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.376 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:42.954 14:40:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:42.954 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:42.954 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.954 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:42.955 Found net devices under 0000:86:00.0: cvl_0_0 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:42.955 Found net devices under 0000:86:00.1: cvl_0_1 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:42.955 14:40:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:42.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:30:42.955 00:30:42.955 --- 10.0.0.2 ping statistics --- 00:30:42.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.955 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:42.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:30:42.955 00:30:42.955 --- 10.0.0.1 ping statistics --- 00:30:42.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.955 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1678775 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1678775 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1678775 ']' 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.955 [2024-11-17 14:40:31.493748] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:42.955 [2024-11-17 14:40:31.494665] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:30:42.955 [2024-11-17 14:40:31.494698] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.955 [2024-11-17 14:40:31.573619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.955 [2024-11-17 14:40:31.614106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.955 [2024-11-17 14:40:31.614142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.955 [2024-11-17 14:40:31.614149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.955 [2024-11-17 14:40:31.614155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.955 [2024-11-17 14:40:31.614160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.955 [2024-11-17 14:40:31.614720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.955 [2024-11-17 14:40:31.680314] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:42.955 [2024-11-17 14:40:31.680522] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
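At this point nvmfappstart has launched the target inside the namespace (note the -i 0 -e 0xFFFF --interrupt-mode -m 0x2 flags above) and waitforlisten blocks until PID 1678775 answers on /var/tmp/spdk.sock. A compressed sketch of that start-and-wait handshake, assuming the repo layout from the log; the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten, which also verifies that the socket accepts RPCs rather than merely existing:

    # launch nvmf_tgt in the target namespace: shm id 0, all tracepoint groups,
    # interrupt mode, reactor pinned to core 1 (mask 0x2)
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!

    # block until the RPC socket shows up, bailing out if the target dies first
    for _ in {1..100}; do
        [[ -S /var/tmp/spdk.sock ]] && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done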
00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:42.955 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.956 [2024-11-17 14:40:31.747404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.956 [2024-11-17 14:40:31.775680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:42.956 14:40:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.956 malloc0 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:42.956 { 00:30:42.956 "params": { 00:30:42.956 "name": "Nvme$subsystem", 00:30:42.956 "trtype": "$TEST_TRANSPORT", 00:30:42.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.956 "adrfam": "ipv4", 00:30:42.956 "trsvcid": "$NVMF_PORT", 00:30:42.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.956 "hdgst": ${hdgst:-false}, 00:30:42.956 "ddgst": ${ddgst:-false} 00:30:42.956 }, 00:30:42.956 "method": "bdev_nvme_attach_controller" 00:30:42.956 } 00:30:42.956 EOF 00:30:42.956 )") 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:42.956 14:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:42.956 "params": { 00:30:42.956 "name": "Nvme1", 00:30:42.956 "trtype": "tcp", 00:30:42.956 "traddr": "10.0.0.2", 00:30:42.956 "adrfam": "ipv4", 00:30:42.956 "trsvcid": "4420", 00:30:42.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.956 "hdgst": false, 00:30:42.956 "ddgst": false 00:30:42.956 }, 00:30:42.956 "method": "bdev_nvme_attach_controller" 00:30:42.956 }' 00:30:42.956 [2024-11-17 14:40:31.868809] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:30:42.956 [2024-11-17 14:40:31.868858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1678802 ] 00:30:42.956 [2024-11-17 14:40:31.946474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.956 [2024-11-17 14:40:31.987714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.215 Running I/O for 10 seconds... 00:30:45.529 8296.00 IOPS, 64.81 MiB/s [2024-11-17T13:40:35.322Z] 8350.50 IOPS, 65.24 MiB/s [2024-11-17T13:40:36.724Z] 8358.33 IOPS, 65.30 MiB/s [2024-11-17T13:40:37.392Z] 8373.50 IOPS, 65.42 MiB/s [2024-11-17T13:40:38.331Z] 8370.80 IOPS, 65.40 MiB/s [2024-11-17T13:40:39.708Z] 8379.00 IOPS, 65.46 MiB/s [2024-11-17T13:40:40.642Z] 8383.71 IOPS, 65.50 MiB/s [2024-11-17T13:40:41.575Z] 8371.88 IOPS, 65.41 MiB/s [2024-11-17T13:40:42.513Z] 8369.78 IOPS, 65.39 MiB/s [2024-11-17T13:40:42.513Z] 8372.60 IOPS, 65.41 MiB/s 00:30:53.288 Latency(us) 00:30:53.288 [2024-11-17T13:40:42.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.288 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:53.288 Verification LBA range: start 0x0 length 0x1000 00:30:53.288 Nvme1n1 : 10.01 8376.01 65.44 0.00 0.00 15238.48 1495.93 21541.40 00:30:53.288 [2024-11-17T13:40:42.513Z] =================================================================================================================== 00:30:53.288 [2024-11-17T13:40:42.513Z] Total : 8376.01 65.44 0.00 0.00 15238.48 1495.93 21541.40 00:30:53.288 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1680591 00:30:53.288 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:53.288 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:53.288 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:53.288 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:53.288 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:53.288 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:53.288 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:53.288 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:53.288 { 00:30:53.288 "params": { 00:30:53.288 "name": "Nvme$subsystem", 00:30:53.288 "trtype": "$TEST_TRANSPORT", 00:30:53.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.288 "adrfam": "ipv4", 00:30:53.288 "trsvcid": "$NVMF_PORT", 00:30:53.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.288 "hdgst": ${hdgst:-false}, 00:30:53.288 "ddgst": ${ddgst:-false} 00:30:53.288 }, 00:30:53.288 "method": "bdev_nvme_attach_controller" 00:30:53.288 } 00:30:53.288 EOF 00:30:53.288 )") 00:30:53.288 [2024-11-17 14:40:42.503051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:30:53.288 [2024-11-17 14:40:42.503087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.288 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:53.288 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:53.548 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:53.548 14:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:53.548 "params": { 00:30:53.548 "name": "Nvme1", 00:30:53.548 "trtype": "tcp", 00:30:53.548 "traddr": "10.0.0.2", 00:30:53.548 "adrfam": "ipv4", 00:30:53.548 "trsvcid": "4420", 00:30:53.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:53.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:53.548 "hdgst": false, 00:30:53.548 "ddgst": false 00:30:53.548 }, 00:30:53.548 "method": "bdev_nvme_attach_controller" 00:30:53.548 }' 00:30:53.548 [2024-11-17 14:40:42.515021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.548 [2024-11-17 14:40:42.515033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.548 [2024-11-17 14:40:42.527015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.548 [2024-11-17 14:40:42.527025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.548 [2024-11-17 14:40:42.539016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.548 [2024-11-17 14:40:42.539026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.548 [2024-11-17 14:40:42.543829] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:30:53.548 [2024-11-17 14:40:42.543871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1680591 ]
00:30:53.548 [2024-11-17 14:40:42.551017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.551029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.563015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.563025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.575018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.575027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.587017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.587028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.599014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.599023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.611016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.611026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.618187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:53.548 [2024-11-17 14:40:42.623016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.623027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.635016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.635029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.647031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.647054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.659017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.659029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.660437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:53.548 [2024-11-17 14:40:42.671025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.671038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.683024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.683046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.695023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.695036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.707016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.707028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.719018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.719030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.731016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.731027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.743028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.743048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.755026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.755044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.548 [2024-11-17 14:40:42.767024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.548 [2024-11-17 14:40:42.767038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.808 [2024-11-17 14:40:42.779019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.808 [2024-11-17 14:40:42.779034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.808 [2024-11-17 14:40:42.791022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.808 [2024-11-17 14:40:42.791037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.808 [2024-11-17 14:40:42.838646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.808 [2024-11-17 14:40:42.838664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.808 [2024-11-17 14:40:42.847034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:53.808 [2024-11-17 14:40:42.847048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
Running I/O for 5 seconds...
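
bdevperf is coming up single-core here: an app-level cpumask of 0x1 is translated into the EAL "-c 0x1" argument above, so spdk_app_start reports one available core and a single reactor starts on core 0, while the interleaved NSID errors continue to arrive from the target process sharing this console. A sketch of the sort of invocation that produces this startup; the binary path and the -q/-o/-w values are illustrative assumptions, not taken from this log (only the one-core mask and the 5-second runtime are visible here, though -o 8192 agrees with the throughput arithmetic checked further down):

    # Single-core initiator run; gen_attach_json is the sketch above.
    ./build/examples/bdevperf --json <(gen_attach_json) -m 0x1 \
        -q 128 -o 8192 -w verify -t 5
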
00:30:53.808 [2024-11-17 14:40:42.861628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:42.861648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.808 [2024-11-17 14:40:42.876269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:42.876288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.808 [2024-11-17 14:40:42.891373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:42.891393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.808 [2024-11-17 14:40:42.906394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:42.906413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.808 [2024-11-17 14:40:42.920556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:42.920574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.808 [2024-11-17 14:40:42.935517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:42.935536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.808 [2024-11-17 14:40:42.951533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:42.951553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.808 [2024-11-17 14:40:42.963709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:42.963727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.808 [2024-11-17 14:40:42.979312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:42.979336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.808 [2024-11-17 14:40:42.994435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:42.994453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.808 [2024-11-17 14:40:43.008226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:43.008244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.808 [2024-11-17 14:40:43.019405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:53.808 [2024-11-17 14:40:43.019423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.067 [2024-11-17 14:40:43.033268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.067 [2024-11-17 14:40:43.033287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.067 [2024-11-17 14:40:43.048407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.067 [2024-11-17 14:40:43.048425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.067 [2024-11-17 14:40:43.063129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.067 
[2024-11-17 14:40:43.063147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.067 [2024-11-17 14:40:43.076947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.067 [2024-11-17 14:40:43.076965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.067 [2024-11-17 14:40:43.091811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.067 [2024-11-17 14:40:43.091829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.067 [2024-11-17 14:40:43.106694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.067 [2024-11-17 14:40:43.106712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.120437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.120455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.135426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.135444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.150675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.150693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.163998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.164017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.176482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.176508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.191572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.191590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.207200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.207220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.220979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.220998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.236102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.236121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.250690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.250708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.264897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.264915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.068 [2024-11-17 14:40:43.280086] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.068 [2024-11-17 14:40:43.280104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.294917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.294937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.306454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.306472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.320569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.320587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.335536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.335554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.347955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.347973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.360712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.360730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.375658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.375675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.391031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.391050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.402713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.402731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.417238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.417256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.432221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.432245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.447153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.447175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.459690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.459707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.474947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.474967] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.327 [2024-11-17 14:40:43.487913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.327 [2024-11-17 14:40:43.487931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.328 [2024-11-17 14:40:43.503528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.328 [2024-11-17 14:40:43.503547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.328 [2024-11-17 14:40:43.515086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.328 [2024-11-17 14:40:43.515105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.328 [2024-11-17 14:40:43.529567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.328 [2024-11-17 14:40:43.529591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.328 [2024-11-17 14:40:43.544370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.328 [2024-11-17 14:40:43.544388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.559420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.559439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.570971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.570989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.584466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.584485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.599921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.599940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.614983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.615002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.628761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.628780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.644438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.644457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.659978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.659997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.675222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.675242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.687991] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.688010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.703170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.703188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.714614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.714638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.729344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.729370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.744276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.744294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.759240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.759260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.773111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.773130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.787938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.787957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.588 [2024-11-17 14:40:43.800714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.588 [2024-11-17 14:40:43.800734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.816364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.816389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.831043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.831063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.845051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.845070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 16408.00 IOPS, 128.19 MiB/s [2024-11-17T13:40:44.073Z] [2024-11-17 14:40:43.859946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.859965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.874995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.875014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.887653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
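
A quick consistency check on the first periodic stats line (16408.00 IOPS, 128.19 MiB/s): bandwidth divided by IOPS gives the I/O size being driven. Assuming MiB means 2^20 bytes, 16408 x 8192 / 1048576 = 128.19, i.e. the run is issuing 8 KiB I/Os:

    # 16408 IOPS at 8 KiB each reproduces the reported bandwidth.
    awk 'BEGIN { printf "%.2f MiB/s\n", 16408 * 8192 / 1048576 }'
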
00:30:54.848 [2024-11-17 14:40:43.887672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.903451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.903469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.914165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.914184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.929047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.929066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.944008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.944026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.959080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.959099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.972850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.972869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:43.987812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:43.987831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:44.002959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:44.002978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:44.017612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:44.017633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:44.032812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:44.032830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:44.047957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:44.047976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.848 [2024-11-17 14:40:44.062878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.848 [2024-11-17 14:40:44.062897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.076929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.076948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.091885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.091903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.106863] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.106882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.119906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.119924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.134691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.134709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.147896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.147914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.160629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.160647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.175250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.175274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.185875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.185894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.200717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.200735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.215795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.215813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.230498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.230517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.241864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.108 [2024-11-17 14:40:44.241882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.108 [2024-11-17 14:40:44.257085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.109 [2024-11-17 14:40:44.257104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.109 [2024-11-17 14:40:44.271956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.109 [2024-11-17 14:40:44.271974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.109 [2024-11-17 14:40:44.286562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.109 [2024-11-17 14:40:44.286580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.109 [2024-11-17 14:40:44.300727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.109 [2024-11-17 14:40:44.300745] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.109 [2024-11-17 14:40:44.315991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.109 [2024-11-17 14:40:44.316009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.331284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.331304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.342168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.342190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.356882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.356900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.371957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.371976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.386756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.386774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.399787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.399805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.415054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.415072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.425913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.425931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.440970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.440989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.456145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.456164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.471144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.471164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.482019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.482038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.497276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.497293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.512330] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.512349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.527867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.527886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.542882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.542900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.554396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.554414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.569027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.569046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.369 [2024-11-17 14:40:44.583940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.369 [2024-11-17 14:40:44.583958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.599016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.599035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.611402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.611419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.624300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.624319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.639474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.639492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.650208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.650227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.664819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.664837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.679746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.679764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.691319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.691336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.704875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.704893] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.720308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.720327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.735102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.735121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.748924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.748942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.763794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.763812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.779755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.779778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.795108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.795127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.806136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.806163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.821462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.821482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.629 [2024-11-17 14:40:44.836214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.629 [2024-11-17 14:40:44.836233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:44.851772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:44.851791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 16422.50 IOPS, 128.30 MiB/s [2024-11-17T13:40:45.113Z] [2024-11-17 14:40:44.866840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:44.866859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:44.880680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:44.880698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:44.895516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:44.895534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:44.911149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:44.911167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 
14:40:44.922007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:44.922026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:44.936627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:44.936645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:44.951497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:44.951515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:44.967408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:44.967425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:44.979731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:44.979748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:44.993064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:44.993082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:45.008129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:45.008147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:45.023408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:45.023428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:45.038947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:45.038969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:45.053424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.888 [2024-11-17 14:40:45.053449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.888 [2024-11-17 14:40:45.068101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.889 [2024-11-17 14:40:45.068120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.889 [2024-11-17 14:40:45.083189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.889 [2024-11-17 14:40:45.083208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.889 [2024-11-17 14:40:45.094418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.889 [2024-11-17 14:40:45.094436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.109374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.109394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.124205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.124223] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.139345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.139371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.152065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.152083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.164563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.164581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.179999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.180017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.194525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.194544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.208043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.208061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.223153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.223172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.234646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.234665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.248893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.248912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.263571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.263589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.274924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.274943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.289317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.289336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.304498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.304516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.319687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.319710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.334634] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.334652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.348506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.348525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.148 [2024-11-17 14:40:45.363091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.148 [2024-11-17 14:40:45.363110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.374590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.374609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.388590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.388608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.403869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.403887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.419954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.419972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.435324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.435342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.450969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.450987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.464765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.464783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.479992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.480010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.490619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.490637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.504912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.504930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.519954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.519973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.535536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.535554] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.548035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.548053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.560730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.560748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.575925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.575943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.590918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.590936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.602287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.602305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.407 [2024-11-17 14:40:45.616999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.407 [2024-11-17 14:40:45.617016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.632203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.632222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.647259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.647277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.658776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.658794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.672897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.672916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.687945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.687963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.702472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.702490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.717039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.717058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.732398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.732417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.747682] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.747700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.763136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.763154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.775626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.775644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.790842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.790859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.802420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.802438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.816791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.816811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.832051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.832071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.842754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.842773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.857236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.857255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 16410.00 IOPS, 128.20 MiB/s [2024-11-17T13:40:45.893Z] [2024-11-17 14:40:45.872717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.872735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.668 [2024-11-17 14:40:45.887781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.668 [2024-11-17 14:40:45.887799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.928 [2024-11-17 14:40:45.903013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.928 [2024-11-17 14:40:45.903033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.928 [2024-11-17 14:40:45.914440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.928 [2024-11-17 14:40:45.914459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.928 [2024-11-17 14:40:45.929236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.928 [2024-11-17 14:40:45.929254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.928 [2024-11-17 14:40:45.944132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
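
Each repeated error pair is one failed RPC round trip: spdk_nvmf_subsystem_add_ns_ext rejects the add because NSID 1 is already taken on the subsystem, and the nvmf_rpc_ns_paused callback then surfaces the failure at the RPC layer. The zcopy test keeps issuing these adds during the I/O run, hence the steady stream of pairs alongside the stats lines. A sketch of the kind of call behind each pair; nvmf_subsystem_add_ns and its -n/--nsid option are standard scripts/rpc.py usage, but the bdev name Malloc0 is an illustrative guess:

    # Re-requesting an NSID that is already in use on cnode1 produces
    # exactly the two-line error pair seen throughout this log.
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
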
00:30:56.928 [2024-11-17 14:40:45.944151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:56.928 [2024-11-17 14:40:45.959025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:56.928 [2024-11-17 14:40:45.959043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats every 12-16 ms from 14:40:45.972993 through 14:40:46.855718 while the test keeps re-adding NSID 1; identical repetitions elided ...]
00:30:57.708 16400.00 IOPS, 128.12 MiB/s [2024-11-17T13:40:46.933Z]
[... the same error pair continues from 14:40:46.868649 through 14:40:47.852255; identical repetitions elided ...]
00:30:58.747 16410.40 IOPS, 128.21 MiB/s [2024-11-17T13:40:47.972Z]
00:30:58.747 [2024-11-17 14:40:47.867442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:58.747 [2024-11-17 14:40:47.867460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:58.747
00:30:58.747 Latency(us)
00:30:58.747 [2024-11-17T13:40:47.972Z] Device Information : runtime(s)      IOPS    MiB/s   Fail/s   TO/s   Average       min       max
00:30:58.747 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:30:58.747 Nvme1n1            :       5.01   16413.62   128.23     0.00   0.00   7791.14   2023.07  13563.10
00:30:58.747 [2024-11-17T13:40:47.972Z] ===================================================================================================================
00:30:58.747 [2024-11-17T13:40:47.972Z] Total              :           16413.62   128.23     0.00   0.00   7791.14   2023.07  13563.10
00:30:58.747 [2024-11-17 14:40:47.879022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:58.747 [2024-11-17 14:40:47.879040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair repeats at ~12 ms intervals through 14:40:48.023028 while the paused subsystem winds down; identical repetitions elided ...]
00:30:59.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1680591) - No such process
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1680591
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:59.008 delay0
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.008 14:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:30:59.008 [2024-11-17 14:40:48.128093] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
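[Editor's note] The error flood above is driven by the test repeatedly attaching the same NSID while the subsystem is paused. A minimal sketch of what triggers the pair of messages, using the same rpc.py command spelling that rpc_cmd forwards to in the lines above (the loop count and the standalone use of the malloc0 bdev are illustrative, not taken from this log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # first attach of NSID 1 succeeds
    "$rpc" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1
    # every repeat attach of the same NSID is rejected by spdk_nvmf_subsystem_add_ns_ext,
    # emitting the "Requested NSID 1 already in use" / "Unable to add namespace" pair
    for _ in $(seq 1 5); do
        "$rpc" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1 || true
    done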
00:31:05.577 Initializing NVMe Controllers
00:31:05.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:05.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:05.577 Initialization complete. Launching workers.
00:31:05.577 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 273, failed: 17551
00:31:05.577 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17728, failed to submit 96
00:31:05.577 success 17637, unsuccessful 91, failed 0
00:31:05.577 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:31:05.577 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:31:05.577 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:05.577 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:31:05.577 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:05.577 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:31:05.577 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:05.577 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:05.577 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1678775 ']'
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1678775
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1678775 ']'
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1678775
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1678775
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1678775'
00:31:05.836 killing process with pid 1678775
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1678775
00:31:05.836 14:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1678775
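[Editor's note] The nvmfcleanup trace above (nvmf/common.sh@121-@129) unloads the kernel NVMe-oF initiator modules with a bounded retry loop. A sketch of that loop, reconstructed from the xtrace lines; everything beyond the two modprobe calls (the break condition and back-off) is inferred, not copied from nvmf/common.sh:

    nvmfcleanup() {
        sync
        if [[ tcp == tcp ]]; then        # transport check, as traced at nvmf/common.sh@123
            set +e
            for i in {1..20}; do         # retry loop traced at nvmf/common.sh@125
                modprobe -v -r nvme-tcp &&
                    modprobe -v -r nvme-fabrics && break
                sleep 1                  # assumed back-off; the delay is not visible in the trace
            done
            set -e
        fi
        return 0
    }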
00:31:05.836 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:05.836 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:05.836 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:05.836 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:31:05.836 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:31:05.836 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:05.836 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:31:05.836 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:05.836 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:05.836 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:05.837 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:05.837 14:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:08.378 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:08.378
00:31:08.378 real    0m31.841s
00:31:08.378 user    0m41.046s
00:31:08.378 sys     0m12.704s
00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:08.379 ************************************
00:31:08.379 END TEST nvmf_zcopy
00:31:08.379 ************************************
00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:08.379 ************************************
00:31:08.379 START TEST nvmf_nmic
00:31:08.379 ************************************
00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:31:08.379 * Looking for test storage...
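[Editor's note] The END/START banners and the real/user/sys block above come from the harness's run_test wrapper, which times each test script between banners. A minimal sketch of that wrapper, inferred from the trace (banner text and the nmic.sh invocation are taken from the log; the implementation is not copied from autotest_common.sh):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                        # produces the real/user/sys block seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode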
00:31:08.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:08.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.379 --rc genhtml_branch_coverage=1 00:31:08.379 --rc genhtml_function_coverage=1 00:31:08.379 --rc genhtml_legend=1 00:31:08.379 --rc geninfo_all_blocks=1 00:31:08.379 --rc geninfo_unexecuted_blocks=1 00:31:08.379 00:31:08.379 ' 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:08.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.379 --rc genhtml_branch_coverage=1 00:31:08.379 --rc genhtml_function_coverage=1 00:31:08.379 --rc genhtml_legend=1 00:31:08.379 --rc geninfo_all_blocks=1 00:31:08.379 --rc geninfo_unexecuted_blocks=1 00:31:08.379 00:31:08.379 ' 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:08.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.379 --rc genhtml_branch_coverage=1 00:31:08.379 --rc genhtml_function_coverage=1 00:31:08.379 --rc genhtml_legend=1 00:31:08.379 --rc geninfo_all_blocks=1 00:31:08.379 --rc geninfo_unexecuted_blocks=1 00:31:08.379 00:31:08.379 ' 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:08.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.379 --rc genhtml_branch_coverage=1 00:31:08.379 --rc genhtml_function_coverage=1 00:31:08.379 --rc genhtml_legend=1 00:31:08.379 --rc geninfo_all_blocks=1 00:31:08.379 --rc geninfo_unexecuted_blocks=1 00:31:08.379 00:31:08.379 ' 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
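[Editor's note] The scripts/common.sh trace above splits each version on ".-" into ver1/ver2 arrays and compares field by field to decide `lt 1.15 2` (picking lcov option names for pre-2.x lcov). A much shorter sketch of the same predicate, using GNU sort's version ordering instead of the script's field-by-field loop:

    lt() {
        # strictly-less-than for dotted versions, e.g. `lt 1.15 2` succeeds
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt 1.15 2 && echo "lcov is older than 2.x"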
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
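[Editor's note] The nvmf/common.sh@17-@18 lines above derive both host identifiers from one `nvme gen-hostnqn` call: the host NQN is UUID-based and the host ID is its UUID suffix. A sketch reproducing that relationship, assuming nvme-cli is installed (how common.sh itself strips the prefix may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID suffix
    echo "hostnqn: $NVME_HOSTNQN"
    echo "hostid:  $NVME_HOSTID"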
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.379 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.380 14:40:57 
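[Editor's note] The PATH values above have accumulated the same /opt/golangci, /opt/protoc, and /opt/go prefixes many times because paths/export.sh prepends them each time it is re-sourced for a nested test. A small dedupe sketch (the editor's illustration, not part of the harness):

    dedupe_path() {
        PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
        export PATH="${PATH%:}"             # drop the trailing colon awk leaves behind
    }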
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.380 14:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:14.950 14:41:03 
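[Editor's note] The build_nvmf_app_args trace above appends `--interrupt-mode` to the target's argument array because this job runs with interrupt mode enabled. Flattened into a plain launch command, it amounts to roughly the following; the nvmf_tgt binary path and the backgrounding/pid capture are assumptions (standard SPDK build layout), while the flags are the ones visible in the trace:

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id and log-flag mask, traced at nvmf/common.sh@29
    NVMF_APP+=(--interrupt-mode)                  # appended because the @33 check '[' 1 -eq 1 ']' is true
    "${NVMF_APP[@]}" &
    nvmfpid=$!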
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:14.950 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.950 14:41:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:14.950 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:14.950 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:14.951 Found net devices under 0000:86:00.0: cvl_0_0 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.951 
14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:14.951 Found net devices under 0000:86:00.1: cvl_0_1 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
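The nvmftestinit sequence above is building the physical-loopback test topology: the target-side E810 port (cvl_0_0) is moved into a private network namespace while its link partner (cvl_0_1) stays in the root namespace, so initiator and target traffic really crosses the wire. A minimal sketch of that plumbing, using the namespace, interface, and address names from this trace (nvmf/common.sh is the authoritative implementation; the links are brought up and ping-verified in the entries that follow):

    # target port gets its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side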
00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:14.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:31:14.951 00:31:14.951 --- 10.0.0.2 ping statistics --- 00:31:14.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.951 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:14.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:31:14.951 00:31:14.951 --- 10.0.0.1 ping statistics --- 00:31:14.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.951 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1685985 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1685985 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1685985 ']' 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.951 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:14.951 [2024-11-17 14:41:03.378236] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:14.951 [2024-11-17 14:41:03.379168] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:31:14.951 [2024-11-17 14:41:03.379200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.951 [2024-11-17 14:41:03.458762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:14.951 [2024-11-17 14:41:03.502584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.951 [2024-11-17 14:41:03.502622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.951 [2024-11-17 14:41:03.502630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.951 [2024-11-17 14:41:03.502635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.951 [2024-11-17 14:41:03.502640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.951 [2024-11-17 14:41:03.504091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.951 [2024-11-17 14:41:03.504125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:14.951 [2024-11-17 14:41:03.504233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.951 [2024-11-17 14:41:03.504234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:14.951 [2024-11-17 14:41:03.571326] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:14.951 [2024-11-17 14:41:03.572111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:14.951 [2024-11-17 14:41:03.572332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
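nvmfappstart has just launched the target inside that namespace with --interrupt-mode, so the four reactors and each nvmf_tgt poll-group thread run event-driven instead of busy-polling (the remaining poll groups flip to intr mode in the next entries). A sketch of the launch-and-wait pattern traced above, with SPDK_ROOT standing in for the workspace path shown in this log:

    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_ROOT"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # waitforlisten (common/autotest_common.sh) then polls /var/tmp/spdk.sock
    # until the app accepts JSON-RPC before any rpc_cmd calls are issued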
00:31:14.952 [2024-11-17 14:41:03.572682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:14.952 [2024-11-17 14:41:03.572734] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:14.952 [2024-11-17 14:41:03.641066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:14.952 Malloc0 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
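The rpc_cmd calls above are the complete provisioning path for the first subsystem: create the TCP transport, back it with a 64 MiB, 512-byte-block malloc bdev, expose it as cnode1, and listen on 10.0.0.2:4420. The same flow as direct rpc.py invocations (a sketch mirroring the traced arguments; rpc_py points at scripts/rpc.py in this harness):

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case 1, which follows, creates a second subsystem (cnode2) and tries to add the same Malloc0 to it; the expected outcome is the -32602 error below, because a namespace claim on a bdev is exclusive_write and one bdev cannot be shared across subsystems this way.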
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:14.952 [2024-11-17 14:41:03.721283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:31:14.952 test case1: single bdev can't be used in multiple subsystems
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:14.952 [2024-11-17 14:41:03.752762] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:31:14.952 [2024-11-17 14:41:03.752781] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:31:14.952 [2024-11-17 14:41:03.752788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:14.952 request:
00:31:14.952 {
00:31:14.952 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:31:14.952 "namespace": {
00:31:14.952 "bdev_name": "Malloc0",
00:31:14.952 "no_auto_visible": false
00:31:14.952 },
00:31:14.952 "method": "nvmf_subsystem_add_ns",
00:31:14.952 "req_id": 1
00:31:14.952 }
00:31:14.952 Got JSON-RPC error response
00:31:14.952 response:
00:31:14.952 {
00:31:14.952 "code": -32602,
00:31:14.952 "message": "Invalid parameters"
00:31:14.952 }
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:31:14.952 14:41:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:14.952 Adding namespace failed - expected result. 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:14.952 test case2: host connect to nvmf target in multiple paths 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:14.952 [2024-11-17 14:41:03.764836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:14.952 14:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:14.952 14:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:14.952 14:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:14.952 14:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:14.952 14:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:14.952 14:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:17.489 14:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:17.489 14:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:17.489 14:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:17.489 14:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:17.489 14:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:17.489 14:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:17.489 14:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:17.489 [global] 00:31:17.489 thread=1 00:31:17.489 invalidate=1 
00:31:17.489 rw=write
00:31:17.489 time_based=1
00:31:17.489 runtime=1
00:31:17.489 ioengine=libaio
00:31:17.489 direct=1
00:31:17.489 bs=4096
00:31:17.489 iodepth=1
00:31:17.489 norandommap=0
00:31:17.489 numjobs=1
00:31:17.489 
00:31:17.489 verify_dump=1
00:31:17.489 verify_backlog=512
00:31:17.489 verify_state_save=0
00:31:17.489 do_verify=1
00:31:17.489 verify=crc32c-intel
00:31:17.489 [job0]
00:31:17.489 filename=/dev/nvme0n1
00:31:17.489 Could not set queue depth (nvme0n1)
00:31:17.489 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:17.489 fio-3.35
00:31:17.489 Starting 1 thread
00:31:18.868 
00:31:18.868 job0: (groupid=0, jobs=1): err= 0: pid=1686601: Sun Nov 17 14:41:07 2024
00:31:18.868 read: IOPS=2681, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec)
00:31:18.868 slat (nsec): min=6295, max=21535, avg=7218.50, stdev=762.32
00:31:18.868 clat (usec): min=159, max=286, avg=188.82, stdev=14.17
00:31:18.868 lat (usec): min=166, max=293, avg=196.04, stdev=14.19
00:31:18.868 clat percentiles (usec):
00:31:18.868 | 1.00th=[ 178], 5.00th=[ 180], 10.00th=[ 180], 20.00th=[ 182],
00:31:18.868 | 30.00th=[ 184], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188],
00:31:18.868 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 196], 95.00th=[ 217],
00:31:18.868 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 260], 99.95th=[ 262],
00:31:18.868 | 99.99th=[ 285]
00:31:18.868 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:31:18.868 slat (usec): min=9, max=27357, avg=19.19, stdev=493.40
00:31:18.868 clat (usec): min=121, max=363, avg=130.98, stdev= 8.22
00:31:18.868 lat (usec): min=131, max=27547, avg=150.17, stdev=494.54
00:31:18.868 clat percentiles (usec):
00:31:18.868 | 1.00th=[ 124], 5.00th=[ 126], 10.00th=[ 127], 20.00th=[ 128],
00:31:18.868 | 30.00th=[ 129], 40.00th=[ 130], 50.00th=[ 130], 60.00th=[ 131],
00:31:18.868 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 137], 95.00th=[ 139],
00:31:18.868 | 99.00th=[ 153], 99.50th=[ 176], 99.90th=[ 215], 99.95th=[ 338],
00:31:18.868 | 99.99th=[ 363]
00:31:18.868 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:31:18.868 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:31:18.868 lat (usec) : 250=99.41%, 500=0.59%
00:31:18.868 cpu : usr=3.80%, sys=4.30%, ctx=5760, majf=0, minf=1
00:31:18.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:18.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:18.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:18.868 issued rwts: total=2684,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:18.868 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:18.868 
00:31:18.868 Run status group 0 (all jobs):
00:31:18.868 READ: bw=10.5MiB/s (11.0MB/s), 10.5MiB/s-10.5MiB/s (11.0MB/s-11.0MB/s), io=10.5MiB (11.0MB), run=1001-1001msec
00:31:18.868 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec
00:31:18.868 
00:31:18.868 Disk stats (read/write):
00:31:18.868 nvme0n1: ios=2574/2560, merge=0/0, ticks=1452/331, in_queue=1783, util=98.60%
00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:31:18.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:18.868 rmmod nvme_tcp 00:31:18.868 rmmod nvme_fabrics 00:31:18.868 rmmod nvme_keyring 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1685985 ']' 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1685985 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1685985 ']' 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1685985 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1685985 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1685985' 00:31:18.868 killing process with pid 
1685985 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1685985 00:31:18.868 14:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1685985 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.128 14:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.032 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:21.032 00:31:21.032 real 0m13.027s 00:31:21.032 user 0m23.474s 00:31:21.032 sys 0m6.191s 00:31:21.032 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:21.032 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.032 ************************************ 00:31:21.032 END TEST nvmf_nmic 00:31:21.032 ************************************ 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:21.292 ************************************ 00:31:21.292 START TEST nvmf_fio_target 00:31:21.292 ************************************ 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:21.292 * Looking for test storage... 
00:31:21.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:21.292 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:21.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.293 --rc genhtml_branch_coverage=1 00:31:21.293 --rc genhtml_function_coverage=1 00:31:21.293 --rc genhtml_legend=1 00:31:21.293 --rc geninfo_all_blocks=1 00:31:21.293 --rc geninfo_unexecuted_blocks=1 00:31:21.293 00:31:21.293 ' 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:21.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.293 --rc genhtml_branch_coverage=1 00:31:21.293 --rc genhtml_function_coverage=1 00:31:21.293 --rc genhtml_legend=1 00:31:21.293 --rc geninfo_all_blocks=1 00:31:21.293 --rc geninfo_unexecuted_blocks=1 00:31:21.293 00:31:21.293 ' 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:21.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.293 --rc genhtml_branch_coverage=1 00:31:21.293 --rc genhtml_function_coverage=1 00:31:21.293 --rc genhtml_legend=1 00:31:21.293 --rc geninfo_all_blocks=1 00:31:21.293 --rc geninfo_unexecuted_blocks=1 00:31:21.293 00:31:21.293 ' 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:21.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.293 --rc genhtml_branch_coverage=1 00:31:21.293 --rc genhtml_function_coverage=1 00:31:21.293 --rc genhtml_legend=1 00:31:21.293 --rc geninfo_all_blocks=1 00:31:21.293 --rc geninfo_unexecuted_blocks=1 00:31:21.293 
00:31:21.293 ' 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:21.293 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:21.294 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.294 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:21.294 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:21.294 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:21.294 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.294 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.294 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.294 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:21.294 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:21.294 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:21.294 14:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:27.969 14:41:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:27.969 14:41:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:27.969 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:27.969 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:27.969 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:27.969 Found net devices under 0000:86:00.1: cvl_0_1 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE")
00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:27.969 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:27.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:27.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms
00:31:27.970 
00:31:27.970 --- 10.0.0.2 ping statistics ---
00:31:27.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:27.970 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:27.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:27.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:31:27.970 
00:31:27.970 --- 10.0.0.1 ping statistics ---
00:31:27.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:27.970 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1690356
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1690356
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1690356 ']'
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:27.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
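Before the target launch just above, the ip/iptables sequence carved out the test topology: the first E810 port (cvl_0_0, behind 0000:86:00.0) moves into the fresh namespace cvl_0_0_ns_spdk as the target side at 10.0.0.2/24, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, a firewall rule admits NVMe/TCP traffic on port 4420, and one ping in each direction proves the path over the physical link. Stripped of the xtrace wrappers, the setup amounts to the following condensed sketch (commands taken directly from the trace; interface names and addresses are the ones the harness chose on this machine, run as root):

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP (port 4420) in from the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespaced target -> root ns
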
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:31:27.970 [2024-11-17 14:41:16.508394] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:27.970 [2024-11-17 14:41:16.509419] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:31:27.970 [2024-11-17 14:41:16.509462] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:27.970 [2024-11-17 14:41:16.589403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:27.970 [2024-11-17 14:41:16.631398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:27.970 [2024-11-17 14:41:16.631438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:27.970 [2024-11-17 14:41:16.631445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:27.970 [2024-11-17 14:41:16.631452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:27.970 [2024-11-17 14:41:16.631457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:27.970 [2024-11-17 14:41:16.632940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:27.970 [2024-11-17 14:41:16.633262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:27.970 [2024-11-17 14:41:16.633346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:27.970 [2024-11-17 14:41:16.633347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:27.970 [2024-11-17 14:41:16.701485] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:31:27.970 [2024-11-17 14:41:16.702273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:31:27.970 [2024-11-17 14:41:16.702516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:31:27.970 [2024-11-17 14:41:16.702854] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:31:27.970 [2024-11-17 14:41:16.702899] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
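The notices above show what --interrupt-mode changes: the four reactors come up (-m 0xF pins cores 0-3), and every SPDK thread (the app thread plus one nvmf poll group per core) is switched to interrupt-driven operation, so the reactors sleep on event file descriptors instead of busy-polling between I/Os. This is exactly the behavior the surrounding nvmf_target_core_interrupt_mode test group exists to exercise. Because the RPC endpoint is a UNIX socket on the shared filesystem, the reactor layout can be inspected from outside the namespace once startup completes; a sketch (framework_get_reactors is a standard SPDK RPC, though its exact output fields vary by release):

    # Query the freshly started target over the default socket /var/tmp/spdk.sock.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_reactors
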
00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.970 14:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:27.970 [2024-11-17 14:41:16.954007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.970 14:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:28.229 14:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:28.229 14:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:28.229 14:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:28.229 14:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:28.488 14:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:28.488 14:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:28.746 14:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:28.746 14:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:29.006 14:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:29.265 14:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:29.265 14:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:29.524 14:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:29.524 14:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:29.524 14:41:18 
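This stretch of the trace (finishing just below) provisions the target over RPC: a TCP transport, seven 64 MiB malloc bdevs with 512-byte blocks (matching MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set earlier), a RAID-0 over two of them and a concat over three more, then one subsystem exposing Malloc0, Malloc1, raid0, and concat0 as namespaces behind a listener on 10.0.0.2:4420. Collapsed into a plain script with the same flags the harness issued, in slightly regrouped order:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    for i in 0 1 2 3 4 5 6; do
        $RPC bdev_malloc_create 64 512               # creates Malloc0 .. Malloc6
    done
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
    done
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side then attaches with plain nvme-cli (nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420, as traced below), which surfaces the four namespaces as /dev/nvme0n1 through /dev/nvme0n4, the very devices the fio job files that follow read and write.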
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:29.524 14:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:29.783 14:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:30.042 14:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:30.042 14:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:30.042 14:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:30.042 14:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:30.300 14:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.559 [2024-11-17 14:41:19.617968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.559 14:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:30.819 14:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:31.079 14:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:31.079 14:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:31.079 14:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:31.079 14:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:31.079 14:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:31.079 14:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:31.079 14:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:33.617 14:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:33.617 14:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:31:33.617 14:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:33.617 14:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:33.617 14:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:33.617 14:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:31:33.617 14:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:33.617 [global] 00:31:33.617 thread=1 00:31:33.617 invalidate=1 00:31:33.617 rw=write 00:31:33.617 time_based=1 00:31:33.617 runtime=1 00:31:33.617 ioengine=libaio 00:31:33.617 direct=1 00:31:33.617 bs=4096 00:31:33.617 iodepth=1 00:31:33.617 norandommap=0 00:31:33.617 numjobs=1 00:31:33.617 00:31:33.617 verify_dump=1 00:31:33.617 verify_backlog=512 00:31:33.617 verify_state_save=0 00:31:33.617 do_verify=1 00:31:33.617 verify=crc32c-intel 00:31:33.617 [job0] 00:31:33.617 filename=/dev/nvme0n1 00:31:33.617 [job1] 00:31:33.617 filename=/dev/nvme0n2 00:31:33.617 [job2] 00:31:33.617 filename=/dev/nvme0n3 00:31:33.617 [job3] 00:31:33.617 filename=/dev/nvme0n4 00:31:33.617 Could not set queue depth (nvme0n1) 00:31:33.617 Could not set queue depth (nvme0n2) 00:31:33.617 Could not set queue depth (nvme0n3) 00:31:33.617 Could not set queue depth (nvme0n4) 00:31:33.617 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:33.617 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:33.617 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:33.617 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:33.617 fio-3.35 00:31:33.617 Starting 4 threads 00:31:34.996 00:31:34.996 job0: (groupid=0, jobs=1): err= 0: pid=1691479: Sun Nov 17 14:41:23 2024 00:31:34.996 read: IOPS=615, BW=2462KiB/s (2521kB/s)(2548KiB/1035msec) 00:31:34.996 slat (nsec): min=6454, max=29289, avg=7724.83, stdev=2589.05 00:31:34.996 clat (usec): min=176, max=41099, avg=1341.57, stdev=6763.75 00:31:34.996 lat (usec): min=184, max=41121, avg=1349.30, stdev=6766.07 00:31:34.996 clat percentiles (usec): 00:31:34.996 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 182], 20.00th=[ 184], 00:31:34.996 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:31:34.996 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 198], 95.00th=[ 210], 00:31:34.996 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:34.996 | 99.99th=[41157] 00:31:34.996 write: IOPS=989, BW=3957KiB/s (4052kB/s)(4096KiB/1035msec); 0 zone resets 00:31:34.996 slat (nsec): min=9624, max=44374, avg=11061.30, stdev=1780.75 00:31:34.996 clat (usec): min=124, max=350, avg=156.44, stdev=29.60 00:31:34.996 lat (usec): min=134, max=394, avg=167.50, stdev=30.27 00:31:34.996 clat percentiles (usec): 00:31:34.996 | 1.00th=[ 129], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133], 00:31:34.996 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 151], 60.00th=[ 161], 00:31:34.996 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 188], 95.00th=[ 239], 00:31:34.996 | 99.00th=[ 243], 
99.50th=[ 245], 99.90th=[ 302], 99.95th=[ 351], 00:31:34.996 | 99.99th=[ 351] 00:31:34.996 bw ( KiB/s): min= 8192, max= 8192, per=69.00%, avg=8192.00, stdev= 0.00, samples=1 00:31:34.996 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:34.996 lat (usec) : 250=98.68%, 500=0.24% 00:31:34.996 lat (msec) : 50=1.08% 00:31:34.996 cpu : usr=0.87%, sys=1.55%, ctx=1663, majf=0, minf=1 00:31:34.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.996 issued rwts: total=637,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.996 job1: (groupid=0, jobs=1): err= 0: pid=1691480: Sun Nov 17 14:41:23 2024 00:31:34.996 read: IOPS=816, BW=3266KiB/s (3345kB/s)(3312KiB/1014msec) 00:31:34.996 slat (nsec): min=2222, max=29051, avg=5345.10, stdev=4007.17 00:31:34.996 clat (usec): min=158, max=43894, avg=997.71, stdev=5635.45 00:31:34.996 lat (usec): min=160, max=43923, avg=1003.05, stdev=5637.44 00:31:34.996 clat percentiles (usec): 00:31:34.996 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:31:34.996 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 206], 00:31:34.996 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 239], 00:31:34.996 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:31:34.996 | 99.99th=[43779] 00:31:34.996 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:31:34.996 slat (nsec): min=6085, max=38775, avg=11722.02, stdev=2224.53 00:31:34.996 clat (usec): min=125, max=279, avg=162.81, stdev=19.93 00:31:34.996 lat (usec): min=132, max=294, avg=174.53, stdev=20.67 00:31:34.996 clat percentiles (usec): 00:31:34.996 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 145], 00:31:34.996 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 161], 60.00th=[ 169], 00:31:34.996 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 196], 00:31:34.996 | 99.00th=[ 215], 99.50th=[ 233], 99.90th=[ 273], 99.95th=[ 281], 00:31:34.996 | 99.99th=[ 281] 00:31:34.996 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=2 00:31:34.996 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:31:34.996 lat (usec) : 250=98.16%, 500=0.86%, 750=0.05% 00:31:34.996 lat (msec) : 10=0.05%, 50=0.86% 00:31:34.996 cpu : usr=1.58%, sys=1.97%, ctx=1853, majf=0, minf=1 00:31:34.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.996 issued rwts: total=828,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.996 job2: (groupid=0, jobs=1): err= 0: pid=1691481: Sun Nov 17 14:41:23 2024 00:31:34.996 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:31:34.996 slat (nsec): min=10074, max=26659, avg=22845.41, stdev=3029.02 00:31:34.996 clat (usec): min=40823, max=41365, avg=40982.41, stdev=100.02 00:31:34.996 lat (usec): min=40846, max=41375, avg=41005.25, stdev=97.71 00:31:34.996 clat percentiles (usec): 00:31:34.996 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:34.996 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:31:34.996 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:34.996 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:34.996 | 99.99th=[41157] 00:31:34.996 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:31:34.996 slat (nsec): min=10934, max=39142, avg=12341.43, stdev=1915.18 00:31:34.996 clat (usec): min=157, max=345, avg=178.78, stdev=13.05 00:31:34.996 lat (usec): min=168, max=384, avg=191.12, stdev=13.81 00:31:34.996 clat percentiles (usec): 00:31:34.996 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 165], 20.00th=[ 169], 00:31:34.996 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:31:34.996 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:31:34.996 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 347], 99.95th=[ 347], 00:31:34.996 | 99.99th=[ 347] 00:31:34.996 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:31:34.996 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:34.996 lat (usec) : 250=95.69%, 500=0.19% 00:31:34.996 lat (msec) : 50=4.12% 00:31:34.996 cpu : usr=0.30%, sys=1.10%, ctx=535, majf=0, minf=1 00:31:34.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.996 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.996 job3: (groupid=0, jobs=1): err= 0: pid=1691482: Sun Nov 17 14:41:23 2024 00:31:34.996 read: IOPS=22, BW=91.5KiB/s (93.7kB/s)(92.0KiB/1005msec) 00:31:34.996 slat (nsec): min=8828, max=23938, avg=21392.91, stdev=3808.65 00:31:34.996 clat (usec): min=234, max=42014, avg=39247.55, stdev=8508.01 00:31:34.996 lat (usec): min=256, max=42038, avg=39268.94, stdev=8507.84 00:31:34.996 clat percentiles (usec): 00:31:34.996 | 1.00th=[ 235], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:34.996 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:34.997 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:34.997 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:34.997 | 99.99th=[42206] 00:31:34.997 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:31:34.997 slat (nsec): min=9863, max=39683, avg=11061.32, stdev=1550.22 00:31:34.997 clat (usec): min=152, max=360, avg=184.95, stdev=21.92 00:31:34.997 lat (usec): min=162, max=400, avg=196.01, stdev=22.51 00:31:34.997 clat percentiles (usec): 00:31:34.997 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 172], 00:31:34.997 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:31:34.997 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 212], 95.00th=[ 237], 00:31:34.997 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 363], 99.95th=[ 363], 00:31:34.997 | 99.99th=[ 363] 00:31:34.997 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:31:34.997 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:34.997 lat (usec) : 250=94.77%, 500=1.12% 00:31:34.997 lat (msec) : 50=4.11% 00:31:34.997 cpu : usr=0.40%, sys=0.40%, ctx=535, majf=0, minf=2 00:31:34.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.997 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.997 00:31:34.997 Run status group 0 (all jobs): 00:31:34.997 READ: bw=5836KiB/s (5976kB/s), 87.8KiB/s-3266KiB/s (89.9kB/s-3345kB/s), io=6040KiB (6185kB), run=1002-1035msec 00:31:34.997 WRITE: bw=11.6MiB/s (12.2MB/s), 2038KiB/s-4039KiB/s (2087kB/s-4136kB/s), io=12.0MiB (12.6MB), run=1002-1035msec 00:31:34.997 00:31:34.997 Disk stats (read/write): 00:31:34.997 nvme0n1: ios=681/1024, merge=0/0, ticks=1356/160, in_queue=1516, util=85.87% 00:31:34.997 nvme0n2: ios=871/1024, merge=0/0, ticks=1252/152, in_queue=1404, util=89.94% 00:31:34.997 nvme0n3: ios=75/512, merge=0/0, ticks=1625/88, in_queue=1713, util=93.44% 00:31:34.997 nvme0n4: ios=76/512, merge=0/0, ticks=803/97, in_queue=900, util=95.18% 00:31:34.997 14:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:34.997 [global] 00:31:34.997 thread=1 00:31:34.997 invalidate=1 00:31:34.997 rw=randwrite 00:31:34.997 time_based=1 00:31:34.997 runtime=1 00:31:34.997 ioengine=libaio 00:31:34.997 direct=1 00:31:34.997 bs=4096 00:31:34.997 iodepth=1 00:31:34.997 norandommap=0 00:31:34.997 numjobs=1 00:31:34.997 00:31:34.997 verify_dump=1 00:31:34.997 verify_backlog=512 00:31:34.997 verify_state_save=0 00:31:34.997 do_verify=1 00:31:34.997 verify=crc32c-intel 00:31:34.997 [job0] 00:31:34.997 filename=/dev/nvme0n1 00:31:34.997 [job1] 00:31:34.997 filename=/dev/nvme0n2 00:31:34.997 [job2] 00:31:34.997 filename=/dev/nvme0n3 00:31:34.997 [job3] 00:31:34.997 filename=/dev/nvme0n4 00:31:34.997 Could not set queue depth (nvme0n1) 00:31:34.997 Could not set queue depth (nvme0n2) 00:31:34.997 Could not set queue depth (nvme0n3) 00:31:34.997 Could not set queue depth (nvme0n4) 00:31:34.997 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:34.997 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:34.997 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:34.997 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:34.997 fio-3.35 00:31:34.997 Starting 4 threads 00:31:36.377 00:31:36.377 job0: (groupid=0, jobs=1): err= 0: pid=1691855: Sun Nov 17 14:41:25 2024 00:31:36.377 read: IOPS=1021, BW=4087KiB/s (4185kB/s)(4116KiB/1007msec) 00:31:36.377 slat (nsec): min=6892, max=22701, avg=8359.35, stdev=1628.92 00:31:36.377 clat (usec): min=188, max=41061, avg=707.93, stdev=4367.57 00:31:36.377 lat (usec): min=196, max=41072, avg=716.29, stdev=4368.04 00:31:36.377 clat percentiles (usec): 00:31:36.377 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:31:36.377 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 237], 60.00th=[ 243], 00:31:36.377 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 310], 00:31:36.377 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:36.377 | 99.99th=[41157] 00:31:36.377 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:31:36.377 slat (nsec): min=9864, max=67304, avg=11258.19, stdev=2070.00 00:31:36.377 
clat (usec): min=133, max=282, avg=159.15, stdev=12.25 00:31:36.377 lat (usec): min=143, max=343, avg=170.41, stdev=13.03 00:31:36.377 clat percentiles (usec): 00:31:36.377 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:31:36.377 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:31:36.377 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 182], 00:31:36.377 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 277], 99.95th=[ 285], 00:31:36.377 | 99.99th=[ 285] 00:31:36.377 bw ( KiB/s): min= 4096, max= 8192, per=29.80%, avg=6144.00, stdev=2896.31, samples=2 00:31:36.377 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:31:36.377 lat (usec) : 250=92.32%, 500=7.21% 00:31:36.377 lat (msec) : 50=0.47% 00:31:36.377 cpu : usr=1.99%, sys=4.17%, ctx=2566, majf=0, minf=1 00:31:36.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.377 issued rwts: total=1029,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.377 job1: (groupid=0, jobs=1): err= 0: pid=1691856: Sun Nov 17 14:41:25 2024 00:31:36.377 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:31:36.377 slat (nsec): min=7152, max=42026, avg=8514.32, stdev=1560.27 00:31:36.377 clat (usec): min=181, max=420, avg=207.22, stdev=15.95 00:31:36.377 lat (usec): min=189, max=428, avg=215.74, stdev=16.26 00:31:36.377 clat percentiles (usec): 00:31:36.377 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 194], 00:31:36.377 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:31:36.377 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 227], 95.00th=[ 233], 00:31:36.377 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 396], 99.95th=[ 412], 00:31:36.377 | 99.99th=[ 420] 00:31:36.377 write: IOPS=2627, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec); 0 zone resets 00:31:36.377 slat (nsec): min=10195, max=48818, avg=12343.84, stdev=2475.14 00:31:36.377 clat (usec): min=120, max=418, avg=151.83, stdev=19.35 00:31:36.377 lat (usec): min=135, max=429, avg=164.17, stdev=20.55 00:31:36.377 clat percentiles (usec): 00:31:36.377 | 1.00th=[ 129], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 135], 00:31:36.377 | 30.00th=[ 137], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 155], 00:31:36.377 | 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 178], 95.00th=[ 186], 00:31:36.377 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 326], 99.95th=[ 338], 00:31:36.377 | 99.99th=[ 420] 00:31:36.377 bw ( KiB/s): min=12288, max=12288, per=59.61%, avg=12288.00, stdev= 0.00, samples=1 00:31:36.377 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:36.377 lat (usec) : 250=99.56%, 500=0.44% 00:31:36.377 cpu : usr=5.30%, sys=7.50%, ctx=5191, majf=0, minf=1 00:31:36.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.377 issued rwts: total=2560,2630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.377 job2: (groupid=0, jobs=1): err= 0: pid=1691857: Sun Nov 17 14:41:25 2024 00:31:36.377 read: IOPS=26, BW=107KiB/s (110kB/s)(108KiB/1006msec) 00:31:36.377 slat (nsec): min=7743, 
max=25434, avg=18492.37, stdev=6510.88 00:31:36.377 clat (usec): min=236, max=41475, avg=33438.19, stdev=16116.63 00:31:36.377 lat (usec): min=245, max=41485, avg=33456.68, stdev=16120.49 00:31:36.377 clat percentiles (usec): 00:31:36.377 | 1.00th=[ 237], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[40633], 00:31:36.377 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:36.377 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:36.377 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:36.377 | 99.99th=[41681] 00:31:36.377 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:31:36.377 slat (nsec): min=10088, max=36810, avg=11311.08, stdev=1517.71 00:31:36.377 clat (usec): min=156, max=321, avg=185.69, stdev=15.21 00:31:36.377 lat (usec): min=166, max=347, avg=197.01, stdev=15.87 00:31:36.377 clat percentiles (usec): 00:31:36.377 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:31:36.377 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:31:36.377 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 210], 00:31:36.377 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 322], 99.95th=[ 322], 00:31:36.377 | 99.99th=[ 322] 00:31:36.378 bw ( KiB/s): min= 4096, max= 4096, per=19.87%, avg=4096.00, stdev= 0.00, samples=1 00:31:36.378 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:36.378 lat (usec) : 250=94.43%, 500=1.48% 00:31:36.378 lat (msec) : 50=4.08% 00:31:36.378 cpu : usr=0.40%, sys=0.50%, ctx=540, majf=0, minf=1 00:31:36.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.378 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.378 job3: (groupid=0, jobs=1): err= 0: pid=1691858: Sun Nov 17 14:41:25 2024 00:31:36.378 read: IOPS=190, BW=763KiB/s (781kB/s)(768KiB/1007msec) 00:31:36.378 slat (nsec): min=7386, max=38317, avg=10312.56, stdev=4960.41 00:31:36.378 clat (usec): min=203, max=41115, avg=4704.61, stdev=12578.53 00:31:36.378 lat (usec): min=211, max=41124, avg=4714.92, stdev=12581.04 00:31:36.378 clat percentiles (usec): 00:31:36.378 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 243], 00:31:36.378 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 293], 00:31:36.378 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[40633], 95.00th=[41157], 00:31:36.378 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:36.378 | 99.99th=[41157] 00:31:36.378 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:31:36.378 slat (nsec): min=10916, max=42267, avg=12392.79, stdev=2197.46 00:31:36.378 clat (usec): min=155, max=343, avg=180.47, stdev=13.92 00:31:36.378 lat (usec): min=166, max=386, avg=192.86, stdev=14.75 00:31:36.378 clat percentiles (usec): 00:31:36.378 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:31:36.378 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:31:36.378 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:31:36.378 | 99.00th=[ 219], 99.50th=[ 229], 99.90th=[ 343], 99.95th=[ 343], 00:31:36.378 | 99.99th=[ 343] 00:31:36.378 bw ( KiB/s): min= 4096, max= 4096, per=19.87%, avg=4096.00, stdev= 0.00, samples=1 00:31:36.378 iops : min= 
1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:36.378 lat (usec) : 250=82.24%, 500=14.63% 00:31:36.378 lat (msec) : 10=0.14%, 50=2.98% 00:31:36.378 cpu : usr=0.60%, sys=1.19%, ctx=706, majf=0, minf=1 00:31:36.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.378 issued rwts: total=192,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.378 00:31:36.378 Run status group 0 (all jobs): 00:31:36.378 READ: bw=14.8MiB/s (15.5MB/s), 107KiB/s-9.99MiB/s (110kB/s-10.5MB/s), io=14.9MiB (15.6MB), run=1001-1007msec 00:31:36.378 WRITE: bw=20.1MiB/s (21.1MB/s), 2034KiB/s-10.3MiB/s (2083kB/s-10.8MB/s), io=20.3MiB (21.3MB), run=1001-1007msec 00:31:36.378 00:31:36.378 Disk stats (read/write): 00:31:36.378 nvme0n1: ios=1074/1536, merge=0/0, ticks=570/233, in_queue=803, util=86.87% 00:31:36.378 nvme0n2: ios=2099/2414, merge=0/0, ticks=946/338, in_queue=1284, util=98.38% 00:31:36.378 nvme0n3: ios=64/512, merge=0/0, ticks=1867/92, in_queue=1959, util=97.09% 00:31:36.378 nvme0n4: ios=231/512, merge=0/0, ticks=1292/92, in_queue=1384, util=97.80% 00:31:36.378 14:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:36.378 [global] 00:31:36.378 thread=1 00:31:36.378 invalidate=1 00:31:36.378 rw=write 00:31:36.378 time_based=1 00:31:36.378 runtime=1 00:31:36.378 ioengine=libaio 00:31:36.378 direct=1 00:31:36.378 bs=4096 00:31:36.378 iodepth=128 00:31:36.378 norandommap=0 00:31:36.378 numjobs=1 00:31:36.378 00:31:36.378 verify_dump=1 00:31:36.378 verify_backlog=512 00:31:36.378 verify_state_save=0 00:31:36.378 do_verify=1 00:31:36.378 verify=crc32c-intel 00:31:36.378 [job0] 00:31:36.378 filename=/dev/nvme0n1 00:31:36.378 [job1] 00:31:36.378 filename=/dev/nvme0n2 00:31:36.378 [job2] 00:31:36.378 filename=/dev/nvme0n3 00:31:36.378 [job3] 00:31:36.378 filename=/dev/nvme0n4 00:31:36.378 Could not set queue depth (nvme0n1) 00:31:36.378 Could not set queue depth (nvme0n2) 00:31:36.378 Could not set queue depth (nvme0n3) 00:31:36.378 Could not set queue depth (nvme0n4) 00:31:36.637 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:36.637 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:36.637 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:36.637 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:36.637 fio-3.35 00:31:36.637 Starting 4 threads 00:31:38.016 00:31:38.016 job0: (groupid=0, jobs=1): err= 0: pid=1692223: Sun Nov 17 14:41:27 2024 00:31:38.016 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:31:38.016 slat (nsec): min=1430, max=11360k, avg=108686.82, stdev=795406.88 00:31:38.016 clat (usec): min=4295, max=33480, avg=13675.65, stdev=4090.16 00:31:38.016 lat (usec): min=4306, max=33485, avg=13784.34, stdev=4151.17 00:31:38.016 clat percentiles (usec): 00:31:38.016 | 1.00th=[ 6259], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:31:38.016 | 30.00th=[11600], 40.00th=[12387], 50.00th=[12911], 60.00th=[13435], 00:31:38.016 | 
70.00th=[14091], 80.00th=[16450], 90.00th=[19006], 95.00th=[20579], 00:31:38.016 | 99.00th=[29754], 99.50th=[31589], 99.90th=[33424], 99.95th=[33424], 00:31:38.016 | 99.99th=[33424] 00:31:38.016 write: IOPS=4255, BW=16.6MiB/s (17.4MB/s)(16.8MiB/1011msec); 0 zone resets 00:31:38.016 slat (usec): min=2, max=11294, avg=122.98, stdev=736.17 00:31:38.016 clat (usec): min=1440, max=63744, avg=16776.99, stdev=12003.48 00:31:38.016 lat (usec): min=1452, max=63752, avg=16899.97, stdev=12078.80 00:31:38.016 clat percentiles (usec): 00:31:38.016 | 1.00th=[ 5211], 5.00th=[ 7111], 10.00th=[ 7898], 20.00th=[ 8586], 00:31:38.016 | 30.00th=[ 9896], 40.00th=[11338], 50.00th=[12125], 60.00th=[13435], 00:31:38.016 | 70.00th=[16188], 80.00th=[25822], 90.00th=[32113], 95.00th=[44303], 00:31:38.016 | 99.00th=[63177], 99.50th=[63177], 99.90th=[63701], 99.95th=[63701], 00:31:38.016 | 99.99th=[63701] 00:31:38.016 bw ( KiB/s): min=12864, max=20528, per=26.25%, avg=16696.00, stdev=5419.27, samples=2 00:31:38.016 iops : min= 3216, max= 5132, avg=4174.00, stdev=1354.82, samples=2 00:31:38.016 lat (msec) : 2=0.10%, 4=0.14%, 10=21.68%, 20=61.97%, 50=14.07% 00:31:38.016 lat (msec) : 100=2.04% 00:31:38.016 cpu : usr=3.66%, sys=5.25%, ctx=281, majf=0, minf=1 00:31:38.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:38.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.016 issued rwts: total=4096,4302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.016 job1: (groupid=0, jobs=1): err= 0: pid=1692224: Sun Nov 17 14:41:27 2024 00:31:38.016 read: IOPS=5178, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1003msec) 00:31:38.016 slat (nsec): min=1395, max=11039k, avg=81186.02, stdev=504450.56 00:31:38.016 clat (usec): min=2186, max=40645, avg=10422.05, stdev=4942.38 00:31:38.016 lat (usec): min=2291, max=40653, avg=10503.24, stdev=4985.18 00:31:38.016 clat percentiles (usec): 00:31:38.016 | 1.00th=[ 4752], 5.00th=[ 6652], 10.00th=[ 7308], 20.00th=[ 7832], 00:31:38.016 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[10159], 00:31:38.016 | 70.00th=[11338], 80.00th=[11731], 90.00th=[13566], 95.00th=[18744], 00:31:38.016 | 99.00th=[33162], 99.50th=[38536], 99.90th=[40633], 99.95th=[40633], 00:31:38.016 | 99.99th=[40633] 00:31:38.016 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:31:38.016 slat (usec): min=2, max=21978, avg=98.08, stdev=776.59 00:31:38.016 clat (usec): min=4838, max=63729, avg=12617.67, stdev=10028.35 00:31:38.016 lat (usec): min=4875, max=63763, avg=12715.75, stdev=10119.03 00:31:38.016 clat percentiles (usec): 00:31:38.016 | 1.00th=[ 5800], 5.00th=[ 7373], 10.00th=[ 7767], 20.00th=[ 7963], 00:31:38.016 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 9372], 00:31:38.016 | 70.00th=[10945], 80.00th=[11338], 90.00th=[28967], 95.00th=[40109], 00:31:38.016 | 99.00th=[51643], 99.50th=[53740], 99.90th=[53740], 99.95th=[57410], 00:31:38.016 | 99.99th=[63701] 00:31:38.016 bw ( KiB/s): min=13664, max=30968, per=35.08%, avg=22316.00, stdev=12235.78, samples=2 00:31:38.016 iops : min= 3416, max= 7742, avg=5579.00, stdev=3058.94, samples=2 00:31:38.016 lat (msec) : 4=0.30%, 10=60.98%, 20=30.22%, 50=7.67%, 100=0.82% 00:31:38.016 cpu : usr=3.69%, sys=5.09%, ctx=517, majf=0, minf=1 00:31:38.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 
00:31:38.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.016 issued rwts: total=5194,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.016 job2: (groupid=0, jobs=1): err= 0: pid=1692226: Sun Nov 17 14:41:27 2024 00:31:38.016 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.1MiB/1008msec) 00:31:38.016 slat (nsec): min=1772, max=13743k, avg=126259.42, stdev=874105.04 00:31:38.016 clat (usec): min=5531, max=40619, avg=14387.55, stdev=5633.44 00:31:38.016 lat (usec): min=5570, max=40626, avg=14513.81, stdev=5716.01 00:31:38.016 clat percentiles (usec): 00:31:38.016 | 1.00th=[ 5997], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[10945], 00:31:38.016 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12911], 60.00th=[13304], 00:31:38.016 | 70.00th=[14222], 80.00th=[17433], 90.00th=[19792], 95.00th=[26346], 00:31:38.016 | 99.00th=[35390], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:31:38.016 | 99.99th=[40633] 00:31:38.016 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:31:38.016 slat (usec): min=2, max=10562, avg=162.05, stdev=727.64 00:31:38.016 clat (usec): min=1601, max=72045, avg=23271.88, stdev=12908.33 00:31:38.016 lat (usec): min=1614, max=72063, avg=23433.93, stdev=12989.62 00:31:38.016 clat percentiles (usec): 00:31:38.016 | 1.00th=[ 5800], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[10814], 00:31:38.016 | 30.00th=[13566], 40.00th=[19530], 50.00th=[24249], 60.00th=[26084], 00:31:38.016 | 70.00th=[27657], 80.00th=[29754], 90.00th=[34866], 95.00th=[52691], 00:31:38.016 | 99.00th=[66323], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:31:38.016 | 99.99th=[71828] 00:31:38.016 bw ( KiB/s): min=11384, max=16384, per=21.83%, avg=13884.00, stdev=3535.53, samples=2 00:31:38.016 iops : min= 2846, max= 4096, avg=3471.00, stdev=883.88, samples=2 00:31:38.016 lat (msec) : 2=0.03%, 4=0.12%, 10=13.55%, 20=49.92%, 50=33.01% 00:31:38.016 lat (msec) : 100=3.37% 00:31:38.016 cpu : usr=2.98%, sys=4.07%, ctx=365, majf=0, minf=1 00:31:38.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:38.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.016 issued rwts: total=3087,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.016 job3: (groupid=0, jobs=1): err= 0: pid=1692231: Sun Nov 17 14:41:27 2024 00:31:38.016 read: IOPS=2195, BW=8780KiB/s (8991kB/s)(8824KiB/1005msec) 00:31:38.016 slat (nsec): min=1742, max=17539k, avg=199045.38, stdev=1239933.22 00:31:38.016 clat (usec): min=2314, max=53690, avg=23440.41, stdev=8782.99 00:31:38.016 lat (usec): min=5804, max=53704, avg=23639.45, stdev=8867.16 00:31:38.016 clat percentiles (usec): 00:31:38.016 | 1.00th=[ 5932], 5.00th=[12911], 10.00th=[12911], 20.00th=[16712], 00:31:38.016 | 30.00th=[18482], 40.00th=[19530], 50.00th=[21365], 60.00th=[22938], 00:31:38.016 | 70.00th=[26608], 80.00th=[30278], 90.00th=[35914], 95.00th=[42206], 00:31:38.016 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:31:38.016 | 99.99th=[53740] 00:31:38.016 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:31:38.016 slat (usec): min=2, max=20942, avg=213.38, stdev=1262.17 00:31:38.016 clat (usec): 
min=14136, max=57056, avg=29239.66, stdev=8437.66 00:31:38.016 lat (usec): min=14158, max=57069, avg=29453.03, stdev=8512.02 00:31:38.016 clat percentiles (usec): 00:31:38.016 | 1.00th=[18220], 5.00th=[19530], 10.00th=[20317], 20.00th=[23725], 00:31:38.016 | 30.00th=[23987], 40.00th=[25822], 50.00th=[26084], 60.00th=[26608], 00:31:38.016 | 70.00th=[32113], 80.00th=[35914], 90.00th=[39584], 95.00th=[49021], 00:31:38.016 | 99.00th=[55313], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:31:38.016 | 99.99th=[56886] 00:31:38.016 bw ( KiB/s): min= 8744, max=11736, per=16.10%, avg=10240.00, stdev=2115.66, samples=2 00:31:38.016 iops : min= 2186, max= 2934, avg=2560.00, stdev=528.92, samples=2 00:31:38.016 lat (msec) : 4=0.02%, 10=1.32%, 20=23.04%, 50=72.95%, 100=2.66% 00:31:38.016 cpu : usr=2.79%, sys=2.89%, ctx=239, majf=0, minf=1 00:31:38.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:38.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.017 issued rwts: total=2206,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.017 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.017 00:31:38.017 Run status group 0 (all jobs): 00:31:38.017 READ: bw=56.3MiB/s (59.1MB/s), 8780KiB/s-20.2MiB/s (8991kB/s-21.2MB/s), io=57.0MiB (59.7MB), run=1003-1011msec 00:31:38.017 WRITE: bw=62.1MiB/s (65.1MB/s), 9.95MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=62.8MiB (65.9MB), run=1003-1011msec 00:31:38.017 00:31:38.017 Disk stats (read/write): 00:31:38.017 nvme0n1: ios=3604/3631, merge=0/0, ticks=48474/49098, in_queue=97572, util=87.86% 00:31:38.017 nvme0n2: ios=3635/4093, merge=0/0, ticks=14640/18866, in_queue=33506, util=97.22% 00:31:38.017 nvme0n3: ios=2582/3055, merge=0/0, ticks=31493/53851, in_queue=85344, util=98.59% 00:31:38.017 nvme0n4: ios=1822/2048, merge=0/0, ticks=23571/26348, in_queue=49919, util=95.90% 00:31:38.017 14:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:38.017 [global] 00:31:38.017 thread=1 00:31:38.017 invalidate=1 00:31:38.017 rw=randwrite 00:31:38.017 time_based=1 00:31:38.017 runtime=1 00:31:38.017 ioengine=libaio 00:31:38.017 direct=1 00:31:38.017 bs=4096 00:31:38.017 iodepth=128 00:31:38.017 norandommap=0 00:31:38.017 numjobs=1 00:31:38.017 00:31:38.017 verify_dump=1 00:31:38.017 verify_backlog=512 00:31:38.017 verify_state_save=0 00:31:38.017 do_verify=1 00:31:38.017 verify=crc32c-intel 00:31:38.017 [job0] 00:31:38.017 filename=/dev/nvme0n1 00:31:38.017 [job1] 00:31:38.017 filename=/dev/nvme0n2 00:31:38.017 [job2] 00:31:38.017 filename=/dev/nvme0n3 00:31:38.017 [job3] 00:31:38.017 filename=/dev/nvme0n4 00:31:38.017 Could not set queue depth (nvme0n1) 00:31:38.017 Could not set queue depth (nvme0n2) 00:31:38.017 Could not set queue depth (nvme0n3) 00:31:38.017 Could not set queue depth (nvme0n4) 00:31:38.276 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:38.276 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:38.276 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:38.276 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:31:38.276 fio-3.35 00:31:38.276 Starting 4 threads 00:31:39.655 00:31:39.655 job0: (groupid=0, jobs=1): err= 0: pid=1692600: Sun Nov 17 14:41:28 2024 00:31:39.655 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:31:39.655 slat (nsec): min=1394, max=13773k, avg=98050.82, stdev=782933.96 00:31:39.655 clat (usec): min=3156, max=38487, avg=12299.07, stdev=4939.01 00:31:39.655 lat (usec): min=3168, max=38495, avg=12397.12, stdev=4999.04 00:31:39.656 clat percentiles (usec): 00:31:39.656 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:31:39.656 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[11469], 00:31:39.656 | 70.00th=[13042], 80.00th=[13829], 90.00th=[17171], 95.00th=[21365], 00:31:39.656 | 99.00th=[34341], 99.50th=[35914], 99.90th=[38011], 99.95th=[38536], 00:31:39.656 | 99.99th=[38536] 00:31:39.656 write: IOPS=5644, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1003msec); 0 zone resets 00:31:39.656 slat (usec): min=2, max=8561, avg=72.72, stdev=568.05 00:31:39.656 clat (usec): min=1357, max=33611, avg=10202.24, stdev=2931.27 00:31:39.656 lat (usec): min=2037, max=33621, avg=10274.97, stdev=2967.78 00:31:39.656 clat percentiles (usec): 00:31:39.656 | 1.00th=[ 3687], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 8225], 00:31:39.656 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:31:39.656 | 70.00th=[10552], 80.00th=[12518], 90.00th=[13042], 95.00th=[14091], 00:31:39.656 | 99.00th=[21103], 99.50th=[24511], 99.90th=[33817], 99.95th=[33817], 00:31:39.656 | 99.99th=[33817] 00:31:39.656 bw ( KiB/s): min=19928, max=25077, per=30.95%, avg=22502.50, stdev=3640.89, samples=2 00:31:39.656 iops : min= 4982, max= 6269, avg=5625.50, stdev=910.05, samples=2 00:31:39.656 lat (msec) : 2=0.01%, 4=0.65%, 10=43.49%, 20=52.47%, 50=3.38% 00:31:39.656 cpu : usr=4.69%, sys=7.09%, ctx=322, majf=0, minf=1 00:31:39.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:39.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:39.656 issued rwts: total=5632,5661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:39.656 job1: (groupid=0, jobs=1): err= 0: pid=1692601: Sun Nov 17 14:41:28 2024 00:31:39.656 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:31:39.656 slat (nsec): min=1421, max=10143k, avg=81078.20, stdev=653625.08 00:31:39.656 clat (usec): min=4271, max=22721, avg=10611.43, stdev=2498.98 00:31:39.656 lat (usec): min=4275, max=23236, avg=10692.51, stdev=2560.84 00:31:39.656 clat percentiles (usec): 00:31:39.656 | 1.00th=[ 5276], 5.00th=[ 7767], 10.00th=[ 8225], 20.00th=[ 9241], 00:31:39.656 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:31:39.656 | 70.00th=[10683], 80.00th=[11863], 90.00th=[14484], 95.00th=[16319], 00:31:39.656 | 99.00th=[18220], 99.50th=[19006], 99.90th=[22676], 99.95th=[22676], 00:31:39.656 | 99.99th=[22676] 00:31:39.656 write: IOPS=6309, BW=24.6MiB/s (25.8MB/s)(24.8MiB/1007msec); 0 zone resets 00:31:39.656 slat (usec): min=2, max=8635, avg=73.51, stdev=548.70 00:31:39.656 clat (usec): min=1578, max=19700, avg=9836.32, stdev=2260.52 00:31:39.656 lat (usec): min=1593, max=19703, avg=9909.82, stdev=2291.74 00:31:39.656 clat percentiles (usec): 00:31:39.656 | 1.00th=[ 4621], 5.00th=[ 6259], 10.00th=[ 6783], 20.00th=[ 8029], 00:31:39.656 | 30.00th=[ 9110], 40.00th=[ 
9634], 50.00th=[10028], 60.00th=[10290], 00:31:39.656 | 70.00th=[10552], 80.00th=[10814], 90.00th=[13304], 95.00th=[14091], 00:31:39.656 | 99.00th=[15926], 99.50th=[16909], 99.90th=[18482], 99.95th=[18744], 00:31:39.656 | 99.99th=[19792] 00:31:39.656 bw ( KiB/s): min=24694, max=25064, per=34.22%, avg=24879.00, stdev=261.63, samples=2 00:31:39.656 iops : min= 6173, max= 6266, avg=6219.50, stdev=65.76, samples=2 00:31:39.656 lat (msec) : 2=0.02%, 4=0.26%, 10=49.90%, 20=49.71%, 50=0.11% 00:31:39.656 cpu : usr=4.37%, sys=7.46%, ctx=433, majf=0, minf=1 00:31:39.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:39.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:39.656 issued rwts: total=6144,6354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:39.656 job2: (groupid=0, jobs=1): err= 0: pid=1692605: Sun Nov 17 14:41:28 2024 00:31:39.656 read: IOPS=3570, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1002msec) 00:31:39.656 slat (nsec): min=1655, max=22639k, avg=147213.77, stdev=1182568.19 00:31:39.656 clat (usec): min=836, max=54692, avg=18340.60, stdev=9324.08 00:31:39.656 lat (usec): min=4442, max=54699, avg=18487.81, stdev=9396.96 00:31:39.656 clat percentiles (usec): 00:31:39.656 | 1.00th=[ 4621], 5.00th=[10028], 10.00th=[11076], 20.00th=[11469], 00:31:39.656 | 30.00th=[11731], 40.00th=[12387], 50.00th=[14222], 60.00th=[17957], 00:31:39.656 | 70.00th=[21627], 80.00th=[23725], 90.00th=[30802], 95.00th=[37487], 00:31:39.656 | 99.00th=[50594], 99.50th=[51643], 99.90th=[53216], 99.95th=[54789], 00:31:39.656 | 99.99th=[54789] 00:31:39.656 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:31:39.656 slat (usec): min=2, max=28477, avg=125.33, stdev=913.78 00:31:39.656 clat (usec): min=4193, max=51778, avg=17089.06, stdev=6493.14 00:31:39.656 lat (usec): min=4203, max=51782, avg=17214.39, stdev=6553.95 00:31:39.656 clat percentiles (usec): 00:31:39.656 | 1.00th=[ 7963], 5.00th=[10683], 10.00th=[11469], 20.00th=[11731], 00:31:39.656 | 30.00th=[11863], 40.00th=[11994], 50.00th=[15270], 60.00th=[17695], 00:31:39.656 | 70.00th=[21627], 80.00th=[22414], 90.00th=[23200], 95.00th=[31327], 00:31:39.656 | 99.00th=[32637], 99.50th=[32637], 99.90th=[42206], 99.95th=[51643], 00:31:39.656 | 99.99th=[51643] 00:31:39.656 bw ( KiB/s): min=12263, max=16384, per=19.70%, avg=14323.50, stdev=2913.99, samples=2 00:31:39.656 iops : min= 3065, max= 4096, avg=3580.50, stdev=729.03, samples=2 00:31:39.656 lat (usec) : 1000=0.01% 00:31:39.656 lat (msec) : 10=4.69%, 20=59.77%, 50=34.95%, 100=0.57% 00:31:39.656 cpu : usr=3.30%, sys=3.70%, ctx=356, majf=0, minf=1 00:31:39.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:31:39.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:39.656 issued rwts: total=3578,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:39.656 job3: (groupid=0, jobs=1): err= 0: pid=1692610: Sun Nov 17 14:41:28 2024 00:31:39.656 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:31:39.656 slat (nsec): min=1397, max=20216k, avg=181184.98, stdev=1310584.37 00:31:39.656 clat (usec): min=8935, max=56364, avg=22859.56, stdev=8090.23 00:31:39.656 lat (usec): min=9110, 
max=56374, avg=23040.74, stdev=8154.88 00:31:39.656 clat percentiles (usec): 00:31:39.656 | 1.00th=[ 9110], 5.00th=[11731], 10.00th=[12911], 20.00th=[16188], 00:31:39.656 | 30.00th=[19268], 40.00th=[20579], 50.00th=[21627], 60.00th=[22938], 00:31:39.656 | 70.00th=[25822], 80.00th=[29230], 90.00th=[31851], 95.00th=[36963], 00:31:39.656 | 99.00th=[55313], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:31:39.656 | 99.99th=[56361] 00:31:39.656 write: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1010msec); 0 zone resets 00:31:39.656 slat (usec): min=2, max=18822, avg=188.97, stdev=1054.57 00:31:39.656 clat (usec): min=1489, max=55973, avg=25226.35, stdev=10822.72 00:31:39.656 lat (usec): min=1503, max=55994, avg=25415.32, stdev=10902.70 00:31:39.656 clat percentiles (usec): 00:31:39.656 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[15926], 20.00th=[17695], 00:31:39.656 | 30.00th=[20841], 40.00th=[21890], 50.00th=[22414], 60.00th=[22676], 00:31:39.656 | 70.00th=[23200], 80.00th=[32637], 90.00th=[46924], 95.00th=[49021], 00:31:39.656 | 99.00th=[51643], 99.50th=[51643], 99.90th=[51643], 99.95th=[55837], 00:31:39.656 | 99.99th=[55837] 00:31:39.656 bw ( KiB/s): min= 8752, max=12263, per=14.45%, avg=10507.50, stdev=2482.65, samples=2 00:31:39.656 iops : min= 2188, max= 3065, avg=2626.50, stdev=620.13, samples=2 00:31:39.656 lat (msec) : 2=0.11%, 10=2.28%, 20=28.53%, 50=66.72%, 100=2.37% 00:31:39.656 cpu : usr=2.08%, sys=3.57%, ctx=277, majf=0, minf=2 00:31:39.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:39.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:39.656 issued rwts: total=2560,2758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:39.656 00:31:39.657 Run status group 0 (all jobs): 00:31:39.657 READ: bw=69.3MiB/s (72.6MB/s), 9.90MiB/s-23.8MiB/s (10.4MB/s-25.0MB/s), io=70.0MiB (73.4MB), run=1002-1010msec 00:31:39.657 WRITE: bw=71.0MiB/s (74.4MB/s), 10.7MiB/s-24.6MiB/s (11.2MB/s-25.8MB/s), io=71.7MiB (75.2MB), run=1002-1010msec 00:31:39.657 00:31:39.657 Disk stats (read/write): 00:31:39.657 nvme0n1: ios=4626/5078, merge=0/0, ticks=46977/45067, in_queue=92044, util=88.58% 00:31:39.657 nvme0n2: ios=4756/5120, merge=0/0, ticks=47853/48375, in_queue=96228, util=90.53% 00:31:39.657 nvme0n3: ios=2500/2560, merge=0/0, ticks=49152/44140, in_queue=93292, util=96.41% 00:31:39.657 nvme0n4: ios=2065/2264, merge=0/0, ticks=46984/50035, in_queue=97019, util=89.36% 00:31:39.657 14:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:39.657 14:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1692832 00:31:39.657 14:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:39.657 14:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:39.657 [global] 00:31:39.657 thread=1 00:31:39.657 invalidate=1 00:31:39.657 rw=read 00:31:39.657 time_based=1 00:31:39.657 runtime=10 00:31:39.657 ioengine=libaio 00:31:39.657 direct=1 00:31:39.657 bs=4096 00:31:39.657 iodepth=1 00:31:39.657 norandommap=1 00:31:39.657 numjobs=1 00:31:39.657 00:31:39.657 [job0] 00:31:39.657 filename=/dev/nvme0n1 00:31:39.657 [job1] 00:31:39.657 
filename=/dev/nvme0n2 00:31:39.657 [job2] 00:31:39.657 filename=/dev/nvme0n3 00:31:39.657 [job3] 00:31:39.657 filename=/dev/nvme0n4 00:31:39.657 Could not set queue depth (nvme0n1) 00:31:39.657 Could not set queue depth (nvme0n2) 00:31:39.657 Could not set queue depth (nvme0n3) 00:31:39.657 Could not set queue depth (nvme0n4) 00:31:39.916 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:39.916 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:39.916 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:39.916 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:39.916 fio-3.35 00:31:39.916 Starting 4 threads 00:31:43.212 14:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:43.212 14:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:43.212 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=299008, buflen=4096 00:31:43.212 fio: pid=1693031, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:43.212 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44675072, buflen=4096 00:31:43.212 fio: pid=1693026, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:43.212 14:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:43.212 14:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:43.212 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=50126848, buflen=4096 00:31:43.212 fio: pid=1692997, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:43.212 14:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:43.212 14:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:43.472 14:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:43.472 14:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:43.472 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=483328, buflen=4096 00:31:43.472 fio: pid=1693009, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:43.472 00:31:43.472 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1692997: Sun Nov 17 14:41:32 2024 00:31:43.472 read: IOPS=3885, BW=15.2MiB/s (15.9MB/s)(47.8MiB/3150msec) 00:31:43.472 slat (usec): min=6, max=24883, avg=10.93, stdev=244.70 00:31:43.472 clat (usec): min=173, max=3860, 
avg=242.31, stdev=62.35 00:31:43.472 lat (usec): min=181, max=25184, avg=253.24, stdev=253.07 00:31:43.472 clat percentiles (usec): 00:31:43.473 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:31:43.473 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 243], 60.00th=[ 249], 00:31:43.473 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 277], 00:31:43.473 | 99.00th=[ 408], 99.50th=[ 441], 99.90th=[ 963], 99.95th=[ 1500], 00:31:43.473 | 99.99th=[ 1942] 00:31:43.473 bw ( KiB/s): min=14131, max=17216, per=56.80%, avg=15691.33, stdev=1176.83, samples=6 00:31:43.473 iops : min= 3532, max= 4304, avg=3922.67, stdev=294.42, samples=6 00:31:43.473 lat (usec) : 250=63.76%, 500=36.11%, 750=0.01%, 1000=0.02% 00:31:43.473 lat (msec) : 2=0.09%, 4=0.01% 00:31:43.473 cpu : usr=2.13%, sys=6.19%, ctx=12241, majf=0, minf=1 00:31:43.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.473 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.473 issued rwts: total=12239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.473 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1693009: Sun Nov 17 14:41:32 2024 00:31:43.473 read: IOPS=35, BW=140KiB/s (143kB/s)(472KiB/3379msec) 00:31:43.473 slat (usec): min=7, max=11871, avg=167.67, stdev=1201.45 00:31:43.473 clat (usec): min=219, max=43035, avg=28277.64, stdev=19014.28 00:31:43.473 lat (usec): min=227, max=53157, avg=28446.53, stdev=19160.64 00:31:43.473 clat percentiles (usec): 00:31:43.473 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 249], 00:31:43.473 | 30.00th=[ 474], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:43.473 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:43.473 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:31:43.473 | 99.99th=[43254] 00:31:43.473 bw ( KiB/s): min= 96, max= 384, per=0.52%, avg=145.83, stdev=116.71, samples=6 00:31:43.473 iops : min= 24, max= 96, avg=36.33, stdev=29.23, samples=6 00:31:43.473 lat (usec) : 250=21.01%, 500=9.24%, 750=0.84% 00:31:43.473 lat (msec) : 50=68.07% 00:31:43.473 cpu : usr=0.00%, sys=0.15%, ctx=125, majf=0, minf=2 00:31:43.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.473 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.473 issued rwts: total=119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.473 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1693026: Sun Nov 17 14:41:32 2024 00:31:43.473 read: IOPS=3725, BW=14.6MiB/s (15.3MB/s)(42.6MiB/2928msec) 00:31:43.473 slat (nsec): min=6802, max=45121, avg=8258.44, stdev=1379.71 00:31:43.473 clat (usec): min=172, max=1912, avg=256.29, stdev=79.25 00:31:43.473 lat (usec): min=179, max=1920, avg=264.55, stdev=79.27 00:31:43.473 clat percentiles (usec): 00:31:43.473 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:31:43.473 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:31:43.473 | 70.00th=[ 235], 80.00th=[ 253], 90.00th=[ 408], 95.00th=[ 412], 00:31:43.473 | 99.00th=[ 424], 99.50th=[ 
429], 99.90th=[ 529], 99.95th=[ 676], 00:31:43.473 | 99.99th=[ 1876] 00:31:43.473 bw ( KiB/s): min= 9548, max=17584, per=53.37%, avg=14744.80, stdev=3522.55, samples=5 00:31:43.473 iops : min= 2387, max= 4396, avg=3686.20, stdev=880.64, samples=5 00:31:43.473 lat (usec) : 250=79.21%, 500=20.65%, 750=0.09% 00:31:43.473 lat (msec) : 2=0.05% 00:31:43.473 cpu : usr=1.95%, sys=6.15%, ctx=10908, majf=0, minf=2 00:31:43.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.473 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.473 issued rwts: total=10908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.473 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1693031: Sun Nov 17 14:41:32 2024 00:31:43.473 read: IOPS=27, BW=107KiB/s (109kB/s)(292KiB/2734msec) 00:31:43.473 slat (nsec): min=8008, max=31780, avg=21858.34, stdev=4524.21 00:31:43.473 clat (usec): min=358, max=41972, avg=37130.40, stdev=12042.61 00:31:43.473 lat (usec): min=381, max=41997, avg=37152.21, stdev=12045.04 00:31:43.473 clat percentiles (usec): 00:31:43.473 | 1.00th=[ 359], 5.00th=[ 400], 10.00th=[40633], 20.00th=[41157], 00:31:43.473 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:43.473 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:43.473 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:43.473 | 99.99th=[42206] 00:31:43.473 bw ( KiB/s): min= 96, max= 151, per=0.39%, avg=107.00, stdev=24.60, samples=5 00:31:43.473 iops : min= 24, max= 37, avg=26.60, stdev= 5.81, samples=5 00:31:43.473 lat (usec) : 500=8.11%, 750=1.35% 00:31:43.473 lat (msec) : 50=89.19% 00:31:43.473 cpu : usr=0.15%, sys=0.00%, ctx=74, majf=0, minf=2 00:31:43.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.473 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.473 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.473 00:31:43.473 Run status group 0 (all jobs): 00:31:43.473 READ: bw=27.0MiB/s (28.3MB/s), 107KiB/s-15.2MiB/s (109kB/s-15.9MB/s), io=91.2MiB (95.6MB), run=2734-3379msec 00:31:43.473 00:31:43.473 Disk stats (read/write): 00:31:43.473 nvme0n1: ios=12158/0, merge=0/0, ticks=2802/0, in_queue=2802, util=94.70% 00:31:43.473 nvme0n2: ios=152/0, merge=0/0, ticks=4212/0, in_queue=4212, util=99.43% 00:31:43.473 nvme0n3: ios=10705/0, merge=0/0, ticks=2625/0, in_queue=2625, util=96.55% 00:31:43.473 nvme0n4: ios=70/0, merge=0/0, ticks=2588/0, in_queue=2588, util=96.48% 00:31:43.733 14:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:43.733 14:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:43.733 14:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:43.733 14:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:43.992 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:43.992 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:44.251 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:44.251 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1692832 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:44.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:44.510 nvmf hotplug test: fio failed as expected 00:31:44.510 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:44.769 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:44.769 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:44.769 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:44.769 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:31:44.769 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:44.769 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:44.770 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:44.770 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:44.770 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:44.770 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:44.770 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:44.770 rmmod nvme_tcp 00:31:44.770 rmmod nvme_fabrics 00:31:44.770 rmmod nvme_keyring 00:31:44.770 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.029 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:45.029 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:45.029 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1690356 ']' 00:31:45.029 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1690356 00:31:45.029 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1690356 ']' 00:31:45.029 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1690356 00:31:45.029 14:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1690356 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1690356' 00:31:45.029 killing process with pid 1690356 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1690356 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1690356 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.029 14:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.569 00:31:47.569 real 0m25.995s 00:31:47.569 user 1m31.615s 00:31:47.569 sys 0m11.426s 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:47.569 ************************************ 00:31:47.569 END TEST nvmf_fio_target 00:31:47.569 ************************************ 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:47.569 ************************************ 00:31:47.569 START TEST nvmf_bdevio 00:31:47.569 ************************************ 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:47.569 * Looking for test storage... 
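Before bdevio.sh gets going, the fio_target run above has just torn its target down via nvmftestfini. A minimal standalone sketch of that cleanup sequence follows; the pid, interface, and namespace names are copied from this particular run and are placeholders on any other machine, and this is only the same steps in order, not the in-tree nvmf/common.sh implementation:
#!/usr/bin/env bash
# Sketch of the nvmftestfini steps traced above (assumed names from this log).
nvmfpid=1690356                                            # pid seen in killprocess above
kill "$nvmfpid"
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done   # wait for the target to exit
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring          # the rmmod nvme_* lines above
# keep every firewall rule except the SPDK_NVMF-tagged ones added during setup
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk 2>/dev/null                # _remove_spdk_ns
ip -4 addr flush cvl_0_1                                   # release the initiator address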
00:31:47.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:47.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.569 --rc genhtml_branch_coverage=1 00:31:47.569 --rc genhtml_function_coverage=1 00:31:47.569 --rc genhtml_legend=1 00:31:47.569 --rc geninfo_all_blocks=1 00:31:47.569 --rc geninfo_unexecuted_blocks=1 00:31:47.569 00:31:47.569 ' 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:47.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.569 --rc genhtml_branch_coverage=1 00:31:47.569 --rc genhtml_function_coverage=1 00:31:47.569 --rc genhtml_legend=1 00:31:47.569 --rc geninfo_all_blocks=1 00:31:47.569 --rc geninfo_unexecuted_blocks=1 00:31:47.569 00:31:47.569 ' 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:47.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.569 --rc genhtml_branch_coverage=1 00:31:47.569 --rc genhtml_function_coverage=1 00:31:47.569 --rc genhtml_legend=1 00:31:47.569 --rc geninfo_all_blocks=1 00:31:47.569 --rc geninfo_unexecuted_blocks=1 00:31:47.569 00:31:47.569 ' 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:47.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.569 --rc genhtml_branch_coverage=1 00:31:47.569 --rc genhtml_function_coverage=1 00:31:47.569 --rc genhtml_legend=1 00:31:47.569 --rc geninfo_all_blocks=1 00:31:47.569 --rc geninfo_unexecuted_blocks=1 00:31:47.569 00:31:47.569 ' 00:31:47.569 14:41:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.569 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.570 14:41:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:47.570 14:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:54.145 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:54.145 14:41:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:54.145 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:54.145 Found net devices under 0000:86:00.0: cvl_0_0 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:54.145 Found net devices under 0000:86:00.1: cvl_0_1 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.145 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:54.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:31:54.146 00:31:54.146 --- 10.0.0.2 ping statistics --- 00:31:54.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.146 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:54.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:31:54.146 00:31:54.146 --- 10.0.0.1 ping statistics --- 00:31:54.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.146 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.146 14:41:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1697367 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1697367 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1697367 ']' 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.146 14:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.146 [2024-11-17 14:41:42.537555] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:54.146 [2024-11-17 14:41:42.538482] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:31:54.146 [2024-11-17 14:41:42.538519] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.146 [2024-11-17 14:41:42.619092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:54.146 [2024-11-17 14:41:42.660803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.146 [2024-11-17 14:41:42.660840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.146 [2024-11-17 14:41:42.660847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.146 [2024-11-17 14:41:42.660853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.146 [2024-11-17 14:41:42.660858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.146 [2024-11-17 14:41:42.662422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:54.146 [2024-11-17 14:41:42.662527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:54.146 [2024-11-17 14:41:42.662653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:54.146 [2024-11-17 14:41:42.662654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:54.146 [2024-11-17 14:41:42.728799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
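The reactor placement reported above follows directly from the core mask: nvmf_tgt was started with -m 0x78, each set bit in the mask selects one core, and 0x78 (binary 1111000) yields cores 3 through 6, matching the four "Reactor started" notices. A standalone decoding sketch, not part of the harness:

# Decode a hex core mask into the cores reactors will be pinned to.
mask=0x78
for core in {0..63}; do
  if (( (mask >> core) & 1 )); then
    echo "reactor expected on core $core"
  fi
done
# Prints cores 3, 4, 5 and 6 for 0x78, as in the log above.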
00:31:54.146 [2024-11-17 14:41:42.729450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:54.146 [2024-11-17 14:41:42.729807] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:54.146 [2024-11-17 14:41:42.730242] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:54.146 [2024-11-17 14:41:42.730278] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.404 [2024-11-17 14:41:43.415426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.404 Malloc0 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.404 14:41:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.404 [2024-11-17 14:41:43.495703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:54.404 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:54.405 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:54.405 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:54.405 { 00:31:54.405 "params": { 00:31:54.405 "name": "Nvme$subsystem", 00:31:54.405 "trtype": "$TEST_TRANSPORT", 00:31:54.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:54.405 "adrfam": "ipv4", 00:31:54.405 "trsvcid": "$NVMF_PORT", 00:31:54.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:54.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:54.405 "hdgst": ${hdgst:-false}, 00:31:54.405 "ddgst": ${ddgst:-false} 00:31:54.405 }, 00:31:54.405 "method": "bdev_nvme_attach_controller" 00:31:54.405 } 00:31:54.405 EOF 00:31:54.405 )") 00:31:54.405 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:54.405 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:31:54.405 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:54.405 14:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:54.405 "params": { 00:31:54.405 "name": "Nvme1", 00:31:54.405 "trtype": "tcp", 00:31:54.405 "traddr": "10.0.0.2", 00:31:54.405 "adrfam": "ipv4", 00:31:54.405 "trsvcid": "4420", 00:31:54.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:54.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:54.405 "hdgst": false, 00:31:54.405 "ddgst": false 00:31:54.405 }, 00:31:54.405 "method": "bdev_nvme_attach_controller" 00:31:54.405 }' 00:31:54.405 [2024-11-17 14:41:43.548430] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
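The JSON document printed just above is what the bdevio process consumes: gen_nvmf_target_json expands the traced heredoc template once per subsystem, pipes the result through jq, and the config reaches bdevio on an inherited descriptor (/dev/fd/62 here), the shape a process substitution produces. A self-contained sketch of the template-expansion step visible in the trace, with variable values hard-coded for illustration and the hdgst/ddgst defaults taken from the trace (the full helper may wrap this fragment further):

# Expand the attach-controller template the way gen_nvmf_target_json does.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config" | jq .   # validate and pretty-print, as the helper does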
00:31:54.405 [2024-11-17 14:41:43.548482] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697464 ] 00:31:54.663 [2024-11-17 14:41:43.626552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:54.663 [2024-11-17 14:41:43.671289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.663 [2024-11-17 14:41:43.671327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.663 [2024-11-17 14:41:43.671328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.921 I/O targets: 00:31:54.921 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:54.921 00:31:54.921 00:31:54.921 CUnit - A unit testing framework for C - Version 2.1-3 00:31:54.921 http://cunit.sourceforge.net/ 00:31:54.921 00:31:54.921 00:31:54.921 Suite: bdevio tests on: Nvme1n1 00:31:54.921 Test: blockdev write read block ...passed 00:31:54.921 Test: blockdev write zeroes read block ...passed 00:31:54.921 Test: blockdev write zeroes read no split ...passed 00:31:54.921 Test: blockdev write zeroes read split ...passed 00:31:54.921 Test: blockdev write zeroes read split partial ...passed 00:31:54.921 Test: blockdev reset ...[2024-11-17 14:41:44.133846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:54.921 [2024-11-17 14:41:44.133911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfaa340 (9): Bad file descriptor 00:31:55.179 [2024-11-17 14:41:44.227399] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
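For reference, the Nvme1n1 device this suite exercises was assembled entirely over RPC a few entries earlier. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so the equivalent standalone sequence against the default /var/tmp/spdk.sock is (a sketch run from the spdk root; arguments copied from the trace):

# 64 MB malloc bdev (131072 x 512-byte blocks) exported as cnode1 on 10.0.0.2:4420.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420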
00:31:55.179 passed 00:31:55.179 Test: blockdev write read 8 blocks ...passed 00:31:55.179 Test: blockdev write read size > 128k ...passed 00:31:55.179 Test: blockdev write read invalid size ...passed 00:31:55.179 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:55.179 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:55.179 Test: blockdev write read max offset ...passed 00:31:55.179 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:55.179 Test: blockdev writev readv 8 blocks ...passed 00:31:55.437 Test: blockdev writev readv 30 x 1block ...passed 00:31:55.437 Test: blockdev writev readv block ...passed 00:31:55.437 Test: blockdev writev readv size > 128k ...passed 00:31:55.437 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:55.437 Test: blockdev comparev and writev ...[2024-11-17 14:41:44.478166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.437 [2024-11-17 14:41:44.478193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.437 [2024-11-17 14:41:44.478207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.437 [2024-11-17 14:41:44.478215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.437 [2024-11-17 14:41:44.478523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.437 [2024-11-17 14:41:44.478535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:55.437 [2024-11-17 14:41:44.478547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.437 [2024-11-17 14:41:44.478555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:55.437 [2024-11-17 14:41:44.478840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.437 [2024-11-17 14:41:44.478849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:55.437 [2024-11-17 14:41:44.478860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.437 [2024-11-17 14:41:44.478868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:55.437 [2024-11-17 14:41:44.479154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.437 [2024-11-17 14:41:44.479165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:55.437 [2024-11-17 14:41:44.479176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.437 [2024-11-17 14:41:44.479184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:55.437 passed 00:31:55.437 Test: blockdev nvme passthru rw ...passed 00:31:55.437 Test: blockdev nvme passthru vendor specific ...[2024-11-17 14:41:44.560664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.437 [2024-11-17 14:41:44.560678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:55.437 [2024-11-17 14:41:44.560790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.437 [2024-11-17 14:41:44.560799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:55.437 [2024-11-17 14:41:44.560905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.437 [2024-11-17 14:41:44.560914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:55.437 [2024-11-17 14:41:44.561016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.437 [2024-11-17 14:41:44.561025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:55.437 passed 00:31:55.437 Test: blockdev nvme admin passthru ...passed 00:31:55.437 Test: blockdev copy ...passed 00:31:55.437 00:31:55.437 Run Summary: Type Total Ran Passed Failed Inactive 00:31:55.437 suites 1 1 n/a 0 0 00:31:55.437 tests 23 23 23 0 0 00:31:55.437 asserts 152 152 152 0 n/a 00:31:55.437 00:31:55.437 Elapsed time = 1.273 seconds 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:55.696 rmmod nvme_tcp 00:31:55.696 rmmod nvme_fabrics 00:31:55.696 rmmod nvme_keyring 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
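The module teardown above is deliberately tolerant: unloading nvme-tcp can fail transiently while references drain, so common.sh drops errexit and retries the removal up to 20 times before unloading nvme-fabrics and restoring set -e. A sketch of that pattern; the break-on-success and the back-off interval are assumptions, since the loop body is only partially visible in the trace:

# Retry kernel module unload until it succeeds or attempts run out.
set +e
for i in {1..20}; do
  modprobe -v -r nvme-tcp && break   # assumed: stop retrying once removal succeeds
  sleep 0.5                          # assumed back-off; the harness's wait is not shown
done
modprobe -v -r nvme-fabrics
set -e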
00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1697367 ']' 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1697367 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1697367 ']' 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1697367 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:55.696 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:55.697 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1697367 00:31:55.697 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:55.697 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:55.697 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1697367' 00:31:55.697 killing process with pid 1697367 00:31:55.697 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1697367 00:31:55.697 14:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1697367 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.956 14:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.496 14:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:58.496 00:31:58.496 real 0m10.769s 00:31:58.496 user 
0m10.010s 00:31:58.496 sys 0m5.374s 00:31:58.496 14:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:58.496 14:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:58.496 ************************************ 00:31:58.496 END TEST nvmf_bdevio 00:31:58.496 ************************************ 00:31:58.496 14:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:58.496 00:31:58.496 real 4m34.214s 00:31:58.496 user 9m12.977s 00:31:58.496 sys 1m53.841s 00:31:58.496 14:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:58.496 14:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:58.496 ************************************ 00:31:58.496 END TEST nvmf_target_core_interrupt_mode 00:31:58.496 ************************************ 00:31:58.496 14:41:47 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:58.496 14:41:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:58.496 14:41:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:58.496 14:41:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:58.496 ************************************ 00:31:58.496 START TEST nvmf_interrupt 00:31:58.496 ************************************ 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:58.496 * Looking for test storage... 
00:31:58.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:58.496 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:58.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.497 --rc genhtml_branch_coverage=1 00:31:58.497 --rc genhtml_function_coverage=1 00:31:58.497 --rc genhtml_legend=1 00:31:58.497 --rc geninfo_all_blocks=1 00:31:58.497 --rc geninfo_unexecuted_blocks=1 00:31:58.497 00:31:58.497 ' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:58.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.497 --rc genhtml_branch_coverage=1 00:31:58.497 --rc genhtml_function_coverage=1 00:31:58.497 --rc genhtml_legend=1 00:31:58.497 --rc geninfo_all_blocks=1 00:31:58.497 --rc geninfo_unexecuted_blocks=1 00:31:58.497 00:31:58.497 ' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:58.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.497 --rc genhtml_branch_coverage=1 00:31:58.497 --rc genhtml_function_coverage=1 00:31:58.497 --rc genhtml_legend=1 00:31:58.497 --rc geninfo_all_blocks=1 00:31:58.497 --rc geninfo_unexecuted_blocks=1 00:31:58.497 00:31:58.497 ' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:58.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.497 --rc genhtml_branch_coverage=1 00:31:58.497 --rc genhtml_function_coverage=1 00:31:58.497 --rc genhtml_legend=1 00:31:58.497 --rc geninfo_all_blocks=1 00:31:58.497 --rc geninfo_unexecuted_blocks=1 00:31:58.497 00:31:58.497 ' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:58.497 14:41:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.072 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:05.072 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:05.072 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:05.072 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:05.072 14:41:53 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:05.072 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:05.072 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:05.072 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:05.072 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:05.072 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:05.073 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.073 14:41:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:05.073 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:05.073 Found net devices under 0000:86:00.0: cvl_0_0 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:05.073 Found net devices under 0000:86:00.1: cvl_0_1 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:05.073 14:41:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:05.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:05.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:32:05.073 00:32:05.073 --- 10.0.0.2 ping statistics --- 00:32:05.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.073 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:05.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:05.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:32:05.073 00:32:05.073 --- 10.0.0.1 ping statistics --- 00:32:05.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.073 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:05.073 14:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1701225 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1701225 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1701225 ']' 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.074 [2024-11-17 14:41:53.443140] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:05.074 [2024-11-17 14:41:53.444062] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:32:05.074 [2024-11-17 14:41:53.444098] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.074 [2024-11-17 14:41:53.522985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:05.074 [2024-11-17 14:41:53.564467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
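Connectivity for this second test is established the same way as before: the two ports of the E810 NIC are split across namespaces, the target-side port (cvl_0_0) moves into cvl_0_0_ns_spdk while the initiator port (cvl_0_1) stays in the root namespace, and one ping in each direction validates the 10.0.0.0/24 link before nvmf_tgt starts. Condensed from the commands in the trace above:

# Target port into its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, then verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1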
00:32:05.074 [2024-11-17 14:41:53.564507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.074 [2024-11-17 14:41:53.564514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:05.074 [2024-11-17 14:41:53.564520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:05.074 [2024-11-17 14:41:53.564525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.074 [2024-11-17 14:41:53.565728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.074 [2024-11-17 14:41:53.565729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.074 [2024-11-17 14:41:53.632034] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:05.074 [2024-11-17 14:41:53.632573] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:05.074 [2024-11-17 14:41:53.632779] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:05.074 5000+0 records in 00:32:05.074 5000+0 records out 00:32:05.074 10240000 bytes (10 MB, 9.8 MiB) copied, 0.017513 s, 585 MB/s 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.074 AIO0 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.074 [2024-11-17 14:41:53.762540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.074 14:41:53 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.074 [2024-11-17 14:41:53.802856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1701225 0 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1701225 0 idle 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1701225 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1701225 -w 256 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1701225 root 20 0 128.2g 46080 33792 S 6.7 0.0 0:00.25 reactor_0' 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1701225 root 20 0 128.2g 46080 33792 S 6.7 0.0 0:00.25 reactor_0 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1701225 1 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1701225 1 idle 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1701225 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:05.074 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1701225 -w 256 00:32:05.075 14:41:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1701229 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1701229 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1701386 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
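From here the harness alternates between driving load and sampling reactor CPU. The spdk_nvme_perf invocation above runs 30%-read random 4 KiB I/O at queue depth 256 for 10 seconds from cores 2-3 (-c 0xC), which should push both reactors out of idle; the checks that follow sample one top iteration and read the %CPU column. A condensed paraphrase of the interrupt/common.sh logic traced below (the function name here is illustrative, not the script's own):

  # Succeeds when reactor_<idx> of <pid> exceeds <threshold> %CPU in a
  # single top sample: -b batch mode, -H show threads, -n 1 one iteration.
  reactor_cpu_above() {
    local pid=$1 idx=$2 threshold=$3 rate
    rate=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" \
           | sed -e 's/^\s*//g' | awk '{print $9}')
    rate=${rate%.*}   # truncate the fractional part, as the harness does
    (( rate > threshold ))
  }

  # While perf runs, reactor 1 should be busy (> 30%); afterwards, idle (<= 30%).
  reactor_cpu_above 1701225 1 30 && echo busy || echo idle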
00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1701225 0 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1701225 0 busy 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1701225 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1701225 -w 256 00:32:05.075 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1701225 root 20 0 128.2g 46848 33792 R 37.5 0.0 0:00.31 reactor_0' 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1701225 root 20 0 128.2g 46848 33792 R 37.5 0.0 0:00.31 reactor_0 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=37.5 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=37 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1701225 1 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1701225 1 busy 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1701225 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1701225 -w 256 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1701229 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.21 reactor_1' 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1701229 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.21 reactor_1 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:05.333 14:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1701386 00:32:15.302 Initializing NVMe Controllers 00:32:15.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:15.302 Controller IO queue size 256, less than required. 00:32:15.302 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:15.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:15.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:15.302 Initialization complete. Launching workers. 
00:32:15.302 ======================================================== 00:32:15.302 Latency(us) 00:32:15.302 Device Information : IOPS MiB/s Average min max 00:32:15.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15889.39 62.07 16120.71 3776.01 30395.07 00:32:15.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16116.48 62.96 15889.48 7180.32 26682.53 00:32:15.302 ======================================================== 00:32:15.302 Total : 32005.87 125.02 16004.27 3776.01 30395.07 00:32:15.302 00:32:15.302 14:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:15.302 14:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1701225 0 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1701225 0 idle 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1701225 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1701225 -w 256 00:32:15.303 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1701225 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.24 reactor_0' 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1701225 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.24 reactor_0 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1701225 1 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1701225 1 idle 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1701225 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:15.562 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1701225 -w 256 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1701229 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1701229 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:15.563 14:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:16.132 14:42:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:16.132 14:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:16.132 14:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:16.132 14:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:16.132 14:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1701225 0 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1701225 0 idle 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1701225 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1701225 -w 256 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1701225 root 20 0 128.2g 72960 33792 S 6.2 0.0 0:20.51 reactor_0' 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1701225 root 20 0 128.2g 72960 33792 S 6.2 0.0 0:20.51 reactor_0 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1701225 1 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1701225 1 idle 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1701225 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
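The connect/verify/disconnect sequence in this stretch uses the kernel NVMe/TCP initiator rather than SPDK's; condensed, with the subsystem NQN, host UUID, and serial taken from this run:

  # Attach the kernel NVMe/TCP host to the SPDK subsystem.
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562

  # waitforserial: the namespace is usable once a block device reports
  # the subsystem's serial number.
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done

  # Teardown, as traced further down: disconnect by subsystem NQN.
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1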
00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1701225 -w 256 00:32:18.169 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1701229 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1' 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1701229 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:18.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:18.428 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:18.687 rmmod nvme_tcp 00:32:18.687 rmmod nvme_fabrics 00:32:18.687 rmmod nvme_keyring 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1701225 ']' 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1701225 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1701225 ']' 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1701225 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1701225 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1701225' 00:32:18.687 killing process with pid 1701225 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1701225 00:32:18.687 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1701225 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.946 14:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.852 14:42:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:20.852 00:32:20.852 real 0m22.817s 00:32:20.852 user 0m39.745s 00:32:20.852 sys 0m8.387s 00:32:20.852 14:42:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:20.852 14:42:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:20.852 ************************************ 00:32:20.852 END TEST nvmf_interrupt 00:32:20.852 ************************************ 00:32:21.111 00:32:21.111 real 27m27.211s 00:32:21.111 user 56m34.280s 00:32:21.111 sys 9m22.738s 00:32:21.111 14:42:10 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.112 14:42:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.112 ************************************ 00:32:21.112 END TEST nvmf_tcp 00:32:21.112 ************************************ 00:32:21.112 14:42:10 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:21.112 14:42:10 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:21.112 14:42:10 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:21.112 14:42:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.112 14:42:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.112 ************************************ 00:32:21.112 START TEST spdkcli_nvmf_tcp 00:32:21.112 ************************************ 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:21.112 * Looking for test storage... 00:32:21.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:21.112 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:21.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.372 --rc genhtml_branch_coverage=1 00:32:21.372 --rc genhtml_function_coverage=1 00:32:21.372 --rc genhtml_legend=1 00:32:21.372 --rc geninfo_all_blocks=1 00:32:21.372 --rc geninfo_unexecuted_blocks=1 00:32:21.372 00:32:21.372 ' 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:21.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.372 --rc genhtml_branch_coverage=1 00:32:21.372 --rc genhtml_function_coverage=1 00:32:21.372 --rc genhtml_legend=1 00:32:21.372 --rc geninfo_all_blocks=1 00:32:21.372 --rc geninfo_unexecuted_blocks=1 00:32:21.372 00:32:21.372 ' 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:21.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.372 --rc genhtml_branch_coverage=1 00:32:21.372 --rc genhtml_function_coverage=1 00:32:21.372 --rc genhtml_legend=1 00:32:21.372 --rc geninfo_all_blocks=1 00:32:21.372 --rc geninfo_unexecuted_blocks=1 00:32:21.372 00:32:21.372 ' 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:21.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.372 --rc genhtml_branch_coverage=1 00:32:21.372 --rc genhtml_function_coverage=1 00:32:21.372 --rc genhtml_legend=1 00:32:21.372 --rc geninfo_all_blocks=1 00:32:21.372 --rc geninfo_unexecuted_blocks=1 00:32:21.372 00:32:21.372 ' 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:21.372 
14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.372 14:42:10 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:21.373 14:42:10 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:21.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1704687 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1704687 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1704687 ']' 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.373 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.373 [2024-11-17 14:42:10.419335] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
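The spdkcli test starting here feeds scripts/spdkcli.py through the spdkcli_job.py batch helper: each input line is a command, a string expected in its output, and whether the command should succeed. The same steps can also be issued one at a time, since spdkcli.py executes its arguments as a single command (the mode check_match itself uses for 'll /nvmf' later in this run). One traced step as a sketch, with the size (MB), block size, and name taken from the batch below:

  # Create a malloc bdev as traced: size 32, block size 512, name Malloc1,
  # then dump the nvmf subtree the way the match step does.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py \
      /bdevs/malloc create 32 512 Malloc1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf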
00:32:21.373 [2024-11-17 14:42:10.419388] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1704687 ] 00:32:21.373 [2024-11-17 14:42:10.492414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:21.373 [2024-11-17 14:42:10.536753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.373 [2024-11-17 14:42:10.536757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.633 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.633 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:21.633 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:21.633 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:21.633 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.633 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:21.633 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:21.633 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:21.633 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:21.633 14:42:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.633 14:42:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:21.633 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:21.633 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:21.633 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:21.633 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:21.633 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:21.633 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:21.633 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:21.633 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:21.633 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:21.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:21.633 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:21.633 ' 00:32:24.171 [2024-11-17 14:42:13.355544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.550 [2024-11-17 14:42:14.696076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:28.087 [2024-11-17 14:42:17.171666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:30.643 [2024-11-17 14:42:19.322349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:32.021 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:32.021 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:32.021 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:32.021 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:32.021 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:32.021 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:32.021 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:32.021 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:32.021 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:32.021 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:32.021 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:32.021 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:32.021 14:42:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:32.021 14:42:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.021 14:42:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.021 14:42:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:32.021 14:42:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.021 14:42:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.021 14:42:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:32.021 14:42:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:32.590 14:42:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:32.590 14:42:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:32.590 14:42:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:32.590 14:42:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.590 14:42:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.590 
14:42:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:32.590 14:42:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.590 14:42:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.590 14:42:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:32.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:32.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:32.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:32.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:32.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:32.590 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:32.590 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:32.590 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:32.590 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:32.590 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:32.590 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:32.590 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:32.590 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:32.590 ' 00:32:39.164 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:39.164 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:39.164 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:39.164 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:39.164 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:39.164 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:39.164 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:39.164 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:39.164 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:39.164 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:39.164 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:39.164 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:39.164 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:39.164 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.164 
14:42:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1704687 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1704687 ']' 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1704687 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1704687 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1704687' 00:32:39.164 killing process with pid 1704687 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1704687 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1704687 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1704687 ']' 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1704687 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1704687 ']' 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1704687 00:32:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1704687) - No such process 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1704687 is not found' 00:32:39.164 Process with pid 1704687 is not found 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:39.164 00:32:39.164 real 0m17.293s 00:32:39.164 user 0m38.134s 00:32:39.164 sys 0m0.769s 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.164 14:42:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.164 ************************************ 00:32:39.164 END TEST spdkcli_nvmf_tcp 00:32:39.164 ************************************ 00:32:39.164 14:42:27 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:39.164 14:42:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:39.164 14:42:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.164 14:42:27 -- common/autotest_common.sh@10 -- # set +x 00:32:39.164 ************************************ 00:32:39.164 START TEST nvmf_identify_passthru 00:32:39.164 ************************************ 00:32:39.164 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:39.164 * Looking for test 
storage... 00:32:39.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.164 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:39.164 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:39.164 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:39.164 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.164 14:42:27 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:39.164 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.164 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:39.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.164 --rc genhtml_branch_coverage=1 00:32:39.164 --rc genhtml_function_coverage=1 00:32:39.164 --rc genhtml_legend=1 00:32:39.164 --rc geninfo_all_blocks=1 00:32:39.164 --rc geninfo_unexecuted_blocks=1 00:32:39.164 00:32:39.164 ' 00:32:39.164 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:39.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.165 --rc genhtml_branch_coverage=1 00:32:39.165 --rc genhtml_function_coverage=1 00:32:39.165 --rc genhtml_legend=1 00:32:39.165 --rc geninfo_all_blocks=1 00:32:39.165 --rc geninfo_unexecuted_blocks=1 00:32:39.165 00:32:39.165 ' 00:32:39.165 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:39.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.165 --rc genhtml_branch_coverage=1 00:32:39.165 --rc genhtml_function_coverage=1 00:32:39.165 --rc genhtml_legend=1 00:32:39.165 --rc geninfo_all_blocks=1 00:32:39.165 --rc geninfo_unexecuted_blocks=1 00:32:39.165 00:32:39.165 ' 00:32:39.165 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:39.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.165 --rc genhtml_branch_coverage=1 00:32:39.165 --rc genhtml_function_coverage=1 00:32:39.165 --rc genhtml_legend=1 00:32:39.165 --rc geninfo_all_blocks=1 00:32:39.165 --rc geninfo_unexecuted_blocks=1 00:32:39.165 00:32:39.165 ' 00:32:39.165 14:42:27 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.165 14:42:27 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.165 14:42:27 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.165 14:42:27 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.165 14:42:27 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.165 14:42:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.165 14:42:27 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.165 14:42:27 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.165 14:42:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:39.165 14:42:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:39.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.165 14:42:27 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.165 14:42:27 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.165 14:42:27 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.165 14:42:27 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.165 14:42:27 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.165 14:42:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.165 14:42:27 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.165 14:42:27 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.165 14:42:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:39.165 14:42:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.165 14:42:27 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.165 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:39.165 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:39.165 14:42:27 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.165 14:42:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:44.444 14:42:33 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:44.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:44.444 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:44.444 Found net devices under 0000:86:00.0: cvl_0_0 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:44.444 Found net devices under 0000:86:00.1: cvl_0_1 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:44.444 14:42:33 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:44.444 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:44.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:44.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:32:44.445 00:32:44.445 --- 10.0.0.2 ping statistics --- 00:32:44.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.445 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:44.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:44.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:32:44.445 00:32:44.445 --- 10.0.0.1 ping statistics --- 00:32:44.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.445 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:44.445 14:42:33 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:44.445 14:42:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:44.445 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:44.445 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.445 14:42:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:44.445 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:44.445 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:44.445 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:44.445 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:44.445 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:44.445 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:44.445 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:44.445 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:44.445 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:44.705 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:44.705 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:44.705 14:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:44.705 14:42:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:44.705 14:42:33 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:44.705 14:42:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:44.705 14:42:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:44.705 14:42:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:48.900 14:42:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:48.900 14:42:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:48.900 14:42:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:48.900 14:42:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:53.283 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:53.283 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:53.283 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.283 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.283 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:53.283 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:53.283 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.283 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1711860 00:32:53.283 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:53.283 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:53.283 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1711860 00:32:53.283 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1711860 ']' 00:32:53.283 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.283 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.283 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:53.283 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.283 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.283 [2024-11-17 14:42:42.147750] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:32:53.283 [2024-11-17 14:42:42.147796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.283 [2024-11-17 14:42:42.227270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:53.283 [2024-11-17 14:42:42.270752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.283 [2024-11-17 14:42:42.270792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
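The target for this test is launched inside the cvl_0_0_ns_spdk namespace with -m 0xF (one reactor per core 0-3, matching the reactor-started notices that follow) and -e 0xFFFF (every tracepoint group enabled), which is why the notice above offers a trace snapshot command. A sketch of capturing that snapshot, with the app name and shm id taken from this run:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Live snapshot of the running target's trace buffer, as advertised above.
  $SPDK/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt

  # Offline variant: copy the shm file first, then parse the copy.
  cp /dev/shm/nvmf_trace.0 /tmp/
  $SPDK/build/bin/spdk_trace -f /tmp/nvmf_trace.0 > /tmp/nvmf_trace.txt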
00:32:53.284 [2024-11-17 14:42:42.270799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.284 [2024-11-17 14:42:42.270806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.284 [2024-11-17 14:42:42.270810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.284 [2024-11-17 14:42:42.272434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.284 [2024-11-17 14:42:42.272475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.284 [2024-11-17 14:42:42.272589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.284 [2024-11-17 14:42:42.272590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:53.284 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.284 INFO: Log level set to 20 00:32:53.284 INFO: Requests: 00:32:53.284 { 00:32:53.284 "jsonrpc": "2.0", 00:32:53.284 "method": "nvmf_set_config", 00:32:53.284 "id": 1, 00:32:53.284 "params": { 00:32:53.284 "admin_cmd_passthru": { 00:32:53.284 "identify_ctrlr": true 00:32:53.284 } 00:32:53.284 } 00:32:53.284 } 00:32:53.284 00:32:53.284 INFO: response: 00:32:53.284 { 00:32:53.284 "jsonrpc": "2.0", 00:32:53.284 "id": 1, 00:32:53.284 "result": true 00:32:53.284 } 00:32:53.284 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.284 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.284 INFO: Setting log level to 20 00:32:53.284 INFO: Setting log level to 20 00:32:53.284 INFO: Log level set to 20 00:32:53.284 INFO: Log level set to 20 00:32:53.284 INFO: Requests: 00:32:53.284 { 00:32:53.284 "jsonrpc": "2.0", 00:32:53.284 "method": "framework_start_init", 00:32:53.284 "id": 1 00:32:53.284 } 00:32:53.284 00:32:53.284 INFO: Requests: 00:32:53.284 { 00:32:53.284 "jsonrpc": "2.0", 00:32:53.284 "method": "framework_start_init", 00:32:53.284 "id": 1 00:32:53.284 } 00:32:53.284 00:32:53.284 [2024-11-17 14:42:42.383658] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:53.284 INFO: response: 00:32:53.284 { 00:32:53.284 "jsonrpc": "2.0", 00:32:53.284 "id": 1, 00:32:53.284 "result": true 00:32:53.284 } 00:32:53.284 00:32:53.284 INFO: response: 00:32:53.284 { 00:32:53.284 "jsonrpc": "2.0", 00:32:53.284 "id": 1, 00:32:53.284 "result": true 00:32:53.284 } 00:32:53.284 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.284 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.284 14:42:42 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:32:53.284 INFO: Setting log level to 40 00:32:53.284 INFO: Setting log level to 40 00:32:53.284 INFO: Setting log level to 40 00:32:53.284 [2024-11-17 14:42:42.396989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.284 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.284 14:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.284 14:42:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.575 Nvme0n1 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.575 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.575 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.575 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.575 [2024-11-17 14:42:45.307457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.575 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.575 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.575 [ 00:32:56.575 { 00:32:56.575 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:56.575 "subtype": "Discovery", 00:32:56.575 "listen_addresses": [], 00:32:56.575 "allow_any_host": true, 00:32:56.575 "hosts": [] 00:32:56.575 }, 00:32:56.575 { 00:32:56.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:56.575 "subtype": "NVMe", 00:32:56.575 "listen_addresses": [ 00:32:56.575 { 00:32:56.575 "trtype": "TCP", 00:32:56.575 "adrfam": "IPv4", 00:32:56.575 "traddr": "10.0.0.2", 00:32:56.575 "trsvcid": "4420" 00:32:56.575 } 00:32:56.575 ], 00:32:56.575 "allow_any_host": true, 00:32:56.575 "hosts": [], 00:32:56.575 "serial_number": 
"SPDK00000000000001", 00:32:56.575 "model_number": "SPDK bdev Controller", 00:32:56.575 "max_namespaces": 1, 00:32:56.575 "min_cntlid": 1, 00:32:56.575 "max_cntlid": 65519, 00:32:56.575 "namespaces": [ 00:32:56.575 { 00:32:56.575 "nsid": 1, 00:32:56.575 "bdev_name": "Nvme0n1", 00:32:56.575 "name": "Nvme0n1", 00:32:56.575 "nguid": "9AAA8ACDC3C245A1A59CF7FA3A8AC7FA", 00:32:56.575 "uuid": "9aaa8acd-c3c2-45a1-a59c-f7fa3a8ac7fa" 00:32:56.575 } 00:32:56.575 ] 00:32:56.576 } 00:32:56.576 ] 00:32:56.576 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:56.576 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.576 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.576 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:56.576 14:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:56.576 14:42:45 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:56.576 14:42:45 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:56.576 14:42:45 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:56.576 14:42:45 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:56.576 14:42:45 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:56.576 14:42:45 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:56.576 rmmod nvme_tcp 00:32:56.576 rmmod nvme_fabrics 00:32:56.576 rmmod nvme_keyring 00:32:56.836 14:42:45 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:56.836 14:42:45 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:56.836 14:42:45 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:56.836 14:42:45 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 1711860 ']' 00:32:56.836 14:42:45 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1711860 00:32:56.836 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1711860 ']' 00:32:56.836 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1711860 00:32:56.836 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:56.836 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:56.836 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1711860 00:32:56.836 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:56.836 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:56.836 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1711860' 00:32:56.836 killing process with pid 1711860 00:32:56.836 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1711860 00:32:56.836 14:42:45 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1711860 00:32:58.214 14:42:47 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:58.214 14:42:47 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:58.214 14:42:47 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:58.214 14:42:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:58.214 14:42:47 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:58.214 14:42:47 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:58.214 14:42:47 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:58.214 14:42:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:58.214 14:42:47 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:58.214 14:42:47 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.215 14:42:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:58.215 14:42:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.751 14:42:49 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:00.751 00:33:00.751 real 0m21.900s 00:33:00.751 user 0m26.966s 00:33:00.751 sys 0m6.219s 00:33:00.751 14:42:49 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:00.751 14:42:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.751 ************************************ 00:33:00.751 END TEST nvmf_identify_passthru 00:33:00.751 ************************************ 00:33:00.751 14:42:49 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:00.751 14:42:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:00.751 14:42:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:00.751 14:42:49 -- common/autotest_common.sh@10 -- # set +x 00:33:00.751 ************************************ 00:33:00.751 START TEST nvmf_dif 00:33:00.751 ************************************ 00:33:00.751 14:42:49 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:00.751 * Looking for test 
storage... 00:33:00.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:00.751 14:42:49 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:00.751 14:42:49 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:00.751 14:42:49 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:00.751 14:42:49 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:00.751 14:42:49 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:00.751 14:42:49 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:00.751 14:42:49 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.751 --rc genhtml_branch_coverage=1 00:33:00.751 --rc genhtml_function_coverage=1 00:33:00.751 --rc genhtml_legend=1 00:33:00.751 --rc geninfo_all_blocks=1 00:33:00.751 --rc geninfo_unexecuted_blocks=1 00:33:00.751 00:33:00.751 ' 00:33:00.751 14:42:49 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.751 --rc genhtml_branch_coverage=1 00:33:00.751 --rc genhtml_function_coverage=1 00:33:00.751 --rc genhtml_legend=1 00:33:00.751 --rc geninfo_all_blocks=1 00:33:00.751 --rc geninfo_unexecuted_blocks=1 00:33:00.751 00:33:00.751 ' 00:33:00.751 14:42:49 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.751 --rc genhtml_branch_coverage=1 00:33:00.751 --rc genhtml_function_coverage=1 00:33:00.751 --rc genhtml_legend=1 00:33:00.751 --rc geninfo_all_blocks=1 00:33:00.751 --rc geninfo_unexecuted_blocks=1 00:33:00.751 00:33:00.751 ' 00:33:00.751 14:42:49 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.751 --rc genhtml_branch_coverage=1 00:33:00.751 --rc genhtml_function_coverage=1 00:33:00.751 --rc genhtml_legend=1 00:33:00.751 --rc geninfo_all_blocks=1 00:33:00.751 --rc geninfo_unexecuted_blocks=1 00:33:00.751 00:33:00.751 ' 00:33:00.751 14:42:49 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:00.751 14:42:49 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.752 14:42:49 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:00.752 14:42:49 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.752 14:42:49 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.752 14:42:49 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.752 14:42:49 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.752 14:42:49 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.752 14:42:49 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.752 14:42:49 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:00.752 14:42:49 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:00.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:00.752 14:42:49 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:00.752 14:42:49 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:00.752 14:42:49 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:00.752 14:42:49 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:00.752 14:42:49 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.752 14:42:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:00.752 14:42:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:00.752 14:42:49 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:33:00.752 14:42:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:07.326 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.326 
14:42:55 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:07.326 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:07.326 Found net devices under 0000:86:00.0: cvl_0_0 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.326 14:42:55 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:07.327 Found net devices under 0000:86:00.1: cvl_0_1 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:07.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:07.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:33:07.327 00:33:07.327 --- 10.0.0.2 ping statistics --- 00:33:07.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.327 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:07.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
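For reference, the topology nvmftestinit assembled above condenses to a short iproute2/iptables sequence. A standalone sketch, assuming the same host (cvl_0_0/cvl_0_1 are this machine's renamed E810 ports and 10.0.0.0/24 is the harness's fixed test subnet; the commands themselves are taken from the xtrace above):

# Target port is isolated in its own network namespace; the initiator port
# stays in the root namespace, so traffic between them traverses the
# physical link rather than loopback.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
ping -c 1 10.0.0.2                                                   # initiator -> target

The successful ping above confirms the initiator can reach the target address; the reverse ping from inside the namespace, whose replies continue below, checks the opposite direction before the target application is started.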
00:33:07.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:33:07.327 00:33:07.327 --- 10.0.0.1 ping statistics --- 00:33:07.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.327 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:07.327 14:42:55 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:09.246 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:09.246 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:09.246 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:09.246 14:42:58 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.246 14:42:58 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:09.246 14:42:58 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:09.246 14:42:58 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.246 14:42:58 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:09.246 14:42:58 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:09.246 14:42:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:09.246 14:42:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:09.246 14:42:58 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:09.246 14:42:58 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:09.246 14:42:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:09.505 14:42:58 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1717432 00:33:09.505 14:42:58 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1717432 00:33:09.505 14:42:58 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:09.505 14:42:58 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1717432 ']' 00:33:09.505 14:42:58 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.505 14:42:58 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.505 14:42:58 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:33:09.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.505 14:42:58 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.505 14:42:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:09.505 [2024-11-17 14:42:58.516581] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:33:09.505 [2024-11-17 14:42:58.516625] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.505 [2024-11-17 14:42:58.599714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.505 [2024-11-17 14:42:58.640608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.505 [2024-11-17 14:42:58.640643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.505 [2024-11-17 14:42:58.640650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.505 [2024-11-17 14:42:58.640657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.505 [2024-11-17 14:42:58.640662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:09.505 [2024-11-17 14:42:58.641220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.765 14:42:58 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.765 14:42:58 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:09.765 14:42:58 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:09.765 14:42:58 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:09.765 14:42:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:09.765 14:42:58 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:09.765 14:42:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:09.765 14:42:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:09.765 14:42:58 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.765 14:42:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:09.765 [2024-11-17 14:42:58.771482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.765 14:42:58 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.765 14:42:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:09.765 14:42:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:09.765 14:42:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.765 14:42:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:09.765 ************************************ 00:33:09.765 START TEST fio_dif_1_default 00:33:09.765 ************************************ 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:09.765 bdev_null0 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:09.765 [2024-11-17 14:42:58.843785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:09.765 { 00:33:09.765 "params": { 00:33:09.765 "name": "Nvme$subsystem", 00:33:09.765 "trtype": "$TEST_TRANSPORT", 00:33:09.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:09.765 "adrfam": "ipv4", 00:33:09.765 "trsvcid": "$NVMF_PORT", 00:33:09.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:09.765 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:33:09.765 "hdgst": ${hdgst:-false}, 00:33:09.765 "ddgst": ${ddgst:-false} 00:33:09.765 }, 00:33:09.765 "method": "bdev_nvme_attach_controller" 00:33:09.765 } 00:33:09.765 EOF 00:33:09.765 )") 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:09.765 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:09.766 "params": { 00:33:09.766 "name": "Nvme0", 00:33:09.766 "trtype": "tcp", 00:33:09.766 "traddr": "10.0.0.2", 00:33:09.766 "adrfam": "ipv4", 00:33:09.766 "trsvcid": "4420", 00:33:09.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:09.766 "hdgst": false, 00:33:09.766 "ddgst": false 00:33:09.766 }, 00:33:09.766 "method": "bdev_nvme_attach_controller" 00:33:09.766 }' 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:09.766 14:42:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.025 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:10.025 fio-3.35 00:33:10.025 Starting 1 thread 00:33:22.238 00:33:22.238 filename0: (groupid=0, jobs=1): err= 0: pid=1717701: Sun Nov 17 14:43:09 2024 00:33:22.238 read: IOPS=96, BW=386KiB/s (395kB/s)(3856KiB/10002msec) 00:33:22.238 slat (nsec): min=6028, max=39465, avg=6552.06, stdev=1961.59 00:33:22.238 clat (usec): min=550, max=43458, avg=41481.29, stdev=2684.65 00:33:22.238 lat (usec): min=571, max=43486, avg=41487.84, stdev=2683.47 00:33:22.238 clat percentiles (usec): 00:33:22.238 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:22.238 | 30.00th=[41157], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:22.238 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:22.238 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:33:22.238 | 99.99th=[43254] 00:33:22.238 bw ( KiB/s): min= 352, max= 416, per=99.86%, avg=385.68, stdev=12.95, samples=19 00:33:22.238 iops : min= 88, max= 104, avg=96.42, stdev= 3.24, samples=19 00:33:22.238 lat (usec) : 750=0.41% 00:33:22.238 lat (msec) : 50=99.59% 00:33:22.238 cpu : usr=92.59%, sys=7.12%, ctx=13, majf=0, minf=0 00:33:22.238 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.239 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.239 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:22.239 00:33:22.239 Run 
status group 0 (all jobs): 00:33:22.239 READ: bw=386KiB/s (395kB/s), 386KiB/s-386KiB/s (395kB/s-395kB/s), io=3856KiB (3949kB), run=10002-10002msec 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.239 00:33:22.239 real 0m11.249s 00:33:22.239 user 0m15.969s 00:33:22.239 sys 0m1.068s 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 ************************************ 00:33:22.239 END TEST fio_dif_1_default 00:33:22.239 ************************************ 00:33:22.239 14:43:10 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:22.239 14:43:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:22.239 14:43:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 ************************************ 00:33:22.239 START TEST fio_dif_1_multi_subsystems 00:33:22.239 ************************************ 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 bdev_null0 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 [2024-11-17 14:43:10.165435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 bdev_null1 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:22.239 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:22.239 { 00:33:22.239 "params": { 00:33:22.239 "name": "Nvme$subsystem", 00:33:22.239 "trtype": "$TEST_TRANSPORT", 00:33:22.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:22.239 "adrfam": "ipv4", 00:33:22.239 "trsvcid": "$NVMF_PORT", 00:33:22.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:22.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:22.239 "hdgst": ${hdgst:-false}, 00:33:22.239 "ddgst": ${ddgst:-false} 00:33:22.239 }, 00:33:22.239 "method": "bdev_nvme_attach_controller" 00:33:22.240 } 00:33:22.240 EOF 00:33:22.240 )") 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:22.240 14:43:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:22.240 { 00:33:22.240 "params": { 00:33:22.240 "name": "Nvme$subsystem", 00:33:22.240 "trtype": "$TEST_TRANSPORT", 00:33:22.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:22.240 "adrfam": "ipv4", 00:33:22.240 "trsvcid": "$NVMF_PORT", 00:33:22.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:22.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:22.240 "hdgst": ${hdgst:-false}, 00:33:22.240 "ddgst": ${ddgst:-false} 00:33:22.240 }, 00:33:22.240 "method": "bdev_nvme_attach_controller" 00:33:22.240 } 00:33:22.240 EOF 00:33:22.240 )") 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:22.240 "params": { 00:33:22.240 "name": "Nvme0", 00:33:22.240 "trtype": "tcp", 00:33:22.240 "traddr": "10.0.0.2", 00:33:22.240 "adrfam": "ipv4", 00:33:22.240 "trsvcid": "4420", 00:33:22.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:22.240 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:22.240 "hdgst": false, 00:33:22.240 "ddgst": false 00:33:22.240 }, 00:33:22.240 "method": "bdev_nvme_attach_controller" 00:33:22.240 },{ 00:33:22.240 "params": { 00:33:22.240 "name": "Nvme1", 00:33:22.240 "trtype": "tcp", 00:33:22.240 "traddr": "10.0.0.2", 00:33:22.240 "adrfam": "ipv4", 00:33:22.240 "trsvcid": "4420", 00:33:22.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:22.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:22.240 "hdgst": false, 00:33:22.240 "ddgst": false 00:33:22.240 }, 00:33:22.240 "method": "bdev_nvme_attach_controller" 00:33:22.240 }' 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:22.240 14:43:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:22.240 14:43:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:22.240 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:22.240 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:22.240 fio-3.35 00:33:22.240 Starting 2 threads 00:33:32.224 00:33:32.224 filename0: (groupid=0, jobs=1): err= 0: pid=1719640: Sun Nov 17 14:43:21 2024 00:33:32.224 read: IOPS=192, BW=770KiB/s (789kB/s)(7712KiB/10010msec) 00:33:32.224 slat (nsec): min=5952, max=61021, avg=7989.59, stdev=3647.20 00:33:32.224 clat (usec): min=386, max=42573, avg=20743.26, stdev=20549.15 00:33:32.224 lat (usec): min=393, max=42581, avg=20751.25, stdev=20548.26 00:33:32.224 clat percentiles (usec): 00:33:32.224 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 416], 20.00th=[ 424], 00:33:32.224 | 30.00th=[ 433], 40.00th=[ 453], 50.00th=[ 668], 60.00th=[41157], 00:33:32.224 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:33:32.224 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:32.224 | 99.99th=[42730] 00:33:32.224 bw ( KiB/s): min= 672, max= 832, per=65.30%, avg=769.60, stdev=38.11, samples=20 00:33:32.224 iops : min= 168, max= 208, avg=192.40, stdev= 9.53, samples=20 00:33:32.224 lat (usec) : 500=42.53%, 750=7.88%, 1000=0.21% 00:33:32.224 lat (msec) : 50=49.38% 00:33:32.224 cpu : usr=97.15%, sys=2.58%, ctx=15, majf=0, minf=10 00:33:32.224 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:32.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.224 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.224 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:32.224 filename1: (groupid=0, jobs=1): err= 0: pid=1719641: Sun Nov 17 14:43:21 2024 00:33:32.224 read: IOPS=102, BW=408KiB/s (418kB/s)(4096KiB/10027msec) 00:33:32.224 slat (nsec): min=5986, max=57005, avg=9743.93, stdev=6905.11 00:33:32.224 clat (usec): min=419, max=43487, avg=39135.83, stdev=9314.62 00:33:32.224 lat (usec): min=425, max=43513, avg=39145.57, stdev=9314.12 00:33:32.224 clat percentiles (usec): 00:33:32.224 | 1.00th=[ 433], 5.00th=[ 685], 10.00th=[40633], 20.00th=[41157], 00:33:32.224 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:32.224 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:32.224 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:33:32.224 | 99.99th=[43254] 00:33:32.224 bw ( KiB/s): min= 384, max= 480, per=34.56%, avg=408.00, stdev=29.13, samples=20 00:33:32.224 iops : min= 96, max= 120, avg=102.00, stdev= 7.28, samples=20 00:33:32.224 lat (usec) : 500=4.39%, 750=1.07% 00:33:32.224 lat (msec) : 50=94.53% 00:33:32.224 cpu : usr=97.89%, sys=1.85%, ctx=9, majf=0, minf=9 00:33:32.224 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:32.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
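filename0 and filename1 in this group map one-to-one onto the two subsystems built at the top of the test. Recreated by hand, that setup is roughly the loop below, issued through scripts/rpc.py against the target's RPC socket (the harness's rpc_cmd wrapper handles the socket plumbing; arguments are copied from the xtrace above, only the loop is added for brevity):

# Sketch of create_subsystems 0 1: one DIF-type-1 null bdev per subsystem,
# both listening on the same TCP portal.
for i in 0 1; do
  rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
      --serial-number 53313233-$i --allow-any-host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
      -t tcp -a 10.0.0.2 -s 4420
done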
00:33:32.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.224 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.224 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:32.224 00:33:32.224 Run status group 0 (all jobs): 00:33:32.224 READ: bw=1178KiB/s (1206kB/s), 408KiB/s-770KiB/s (418kB/s-789kB/s), io=11.5MiB (12.1MB), run=10010-10027msec 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.484 00:33:32.484 real 0m11.395s 00:33:32.484 user 0m26.831s 00:33:32.484 sys 0m0.753s 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:32.484 14:43:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:32.484 ************************************ 00:33:32.484 END TEST fio_dif_1_multi_subsystems 00:33:32.484 ************************************ 00:33:32.484 14:43:21 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:33:32.484 14:43:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:32.484 14:43:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:32.484 14:43:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:32.484 ************************************ 00:33:32.484 START TEST fio_dif_rand_params 00:33:32.484 ************************************ 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:32.484 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.485 bdev_null0 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.485 [2024-11-17 14:43:21.637335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.485 
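This test switches NULL_DIF to 3, so the null bdev now carries T10 DIF type 3 protection information in its 16-byte per-block metadata while the transport keeps --dif-insert-or-strip; the intent, as the harness reads, is that the target inserts and strips the PI itself and host-side I/O stays plain 512-byte blocks. The two knobs as issued in this run (flags verbatim from the xtrace; rpc.py stands in for rpc_cmd):

rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip   # target-side PI handling
rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# 64 MB null bdev, 512 B data blocks + 16 B metadata per block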
14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:32.485 { 00:33:32.485 "params": { 00:33:32.485 "name": "Nvme$subsystem", 00:33:32.485 "trtype": "$TEST_TRANSPORT", 00:33:32.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:32.485 "adrfam": "ipv4", 00:33:32.485 "trsvcid": "$NVMF_PORT", 00:33:32.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:32.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:32.485 "hdgst": ${hdgst:-false}, 00:33:32.485 "ddgst": ${ddgst:-false} 00:33:32.485 }, 00:33:32.485 "method": "bdev_nvme_attach_controller" 00:33:32.485 } 00:33:32.485 EOF 00:33:32.485 )") 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:32.485 "params": { 00:33:32.485 "name": "Nvme0", 00:33:32.485 "trtype": "tcp", 00:33:32.485 "traddr": "10.0.0.2", 00:33:32.485 "adrfam": "ipv4", 00:33:32.485 "trsvcid": "4420", 00:33:32.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:32.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:32.485 "hdgst": false, 00:33:32.485 "ddgst": false 00:33:32.485 }, 00:33:32.485 "method": "bdev_nvme_attach_controller" 00:33:32.485 }' 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:32.485 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:32.766 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:32.766 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:32.766 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:32.766 14:43:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.025 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:33.025 ... 
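The banner above implies a job file along these lines; the values come from the parameters set at target/dif.sh@103 (bs=128k, numjobs=3, iodepth=3, runtime=5) rather than from a verbatim capture, so treat this as a reconstruction:

# Inferred fio job for the 3-thread phase (the spdk_bdev engine needs thread=1)
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1
[filename0]
filename=Nvme0n1

With three jobs at queue depth 3 against a single DIF-type-3 null bdev, the five-second runs below land around 36-41 MiB/s per thread.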
00:33:33.025 fio-3.35 00:33:33.025 Starting 3 threads 00:33:39.592 00:33:39.592 filename0: (groupid=0, jobs=1): err= 0: pid=1721516: Sun Nov 17 14:43:27 2024 00:33:39.592 read: IOPS=326, BW=40.8MiB/s (42.8MB/s)(206MiB/5048msec) 00:33:39.592 slat (nsec): min=6181, max=24391, avg=10550.09, stdev=1930.09 00:33:39.592 clat (usec): min=4797, max=52147, avg=9156.94, stdev=5662.36 00:33:39.592 lat (usec): min=4809, max=52159, avg=9167.49, stdev=5662.35 00:33:39.592 clat percentiles (usec): 00:33:39.592 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 6587], 20.00th=[ 7439], 00:33:39.592 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:33:39.592 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10552], 00:33:39.592 | 99.00th=[47973], 99.50th=[49546], 99.90th=[51643], 99.95th=[52167], 00:33:39.592 | 99.99th=[52167] 00:33:39.592 bw ( KiB/s): min=28160, max=47872, per=35.64%, avg=42086.40, stdev=6528.08, samples=10 00:33:39.592 iops : min= 220, max= 374, avg=328.80, stdev=51.00, samples=10 00:33:39.592 lat (msec) : 10=89.19%, 20=8.86%, 50=1.58%, 100=0.36% 00:33:39.592 cpu : usr=94.69%, sys=5.01%, ctx=15, majf=0, minf=48 00:33:39.592 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.592 issued rwts: total=1647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.592 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.592 filename0: (groupid=0, jobs=1): err= 0: pid=1721517: Sun Nov 17 14:43:27 2024 00:33:39.592 read: IOPS=308, BW=38.5MiB/s (40.4MB/s)(194MiB/5045msec) 00:33:39.592 slat (nsec): min=6209, max=24722, avg=10590.91, stdev=1923.91 00:33:39.592 clat (usec): min=3252, max=51177, avg=9693.75, stdev=5708.42 00:33:39.592 lat (usec): min=3258, max=51188, avg=9704.34, stdev=5708.60 00:33:39.592 clat percentiles (usec): 00:33:39.593 | 1.00th=[ 3752], 5.00th=[ 5866], 10.00th=[ 6325], 20.00th=[ 7504], 00:33:39.593 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:33:39.593 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11338], 95.00th=[11863], 00:33:39.593 | 99.00th=[49546], 99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:33:39.593 | 99.99th=[51119] 00:33:39.593 bw ( KiB/s): min=31744, max=53760, per=33.65%, avg=39731.20, stdev=6668.90, samples=10 00:33:39.593 iops : min= 248, max= 420, avg=310.40, stdev=52.10, samples=10 00:33:39.593 lat (msec) : 4=1.48%, 10=68.49%, 20=28.17%, 50=1.61%, 100=0.26% 00:33:39.593 cpu : usr=94.11%, sys=5.59%, ctx=10, majf=0, minf=73 00:33:39.593 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.593 issued rwts: total=1555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.593 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.593 filename0: (groupid=0, jobs=1): err= 0: pid=1721518: Sun Nov 17 14:43:27 2024 00:33:39.593 read: IOPS=288, BW=36.0MiB/s (37.8MB/s)(182MiB/5045msec) 00:33:39.593 slat (nsec): min=6194, max=25693, avg=10646.91, stdev=1892.34 00:33:39.593 clat (usec): min=4778, max=89918, avg=10359.01, stdev=6723.67 00:33:39.593 lat (usec): min=4790, max=89927, avg=10369.66, stdev=6723.68 00:33:39.593 clat percentiles (usec): 00:33:39.593 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 6521], 
20.00th=[ 8029], 00:33:39.593 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:33:39.593 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11731], 95.00th=[12387], 00:33:39.593 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51643], 99.95th=[89654], 00:33:39.593 | 99.99th=[89654] 00:33:39.593 bw ( KiB/s): min=33792, max=41728, per=31.50%, avg=37196.80, stdev=2775.76, samples=10 00:33:39.593 iops : min= 264, max= 326, avg=290.60, stdev=21.69, samples=10 00:33:39.593 lat (msec) : 10=61.79%, 20=35.67%, 50=1.65%, 100=0.89% 00:33:39.593 cpu : usr=94.98%, sys=4.72%, ctx=12, majf=0, minf=19 00:33:39.593 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.593 issued rwts: total=1455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.593 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.593 00:33:39.593 Run status group 0 (all jobs): 00:33:39.593 READ: bw=115MiB/s (121MB/s), 36.0MiB/s-40.8MiB/s (37.8MB/s-42.8MB/s), io=582MiB (610MB), run=5045-5048msec 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 bdev_null0 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 [2024-11-17 14:43:27.743810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 bdev_null1 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 bdev_null2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:39.593 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.594 14:43:27 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:39.594 { 00:33:39.594 "params": { 00:33:39.594 "name": "Nvme$subsystem", 00:33:39.594 "trtype": "$TEST_TRANSPORT", 00:33:39.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.594 "adrfam": "ipv4", 00:33:39.594 "trsvcid": "$NVMF_PORT", 00:33:39.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.594 "hdgst": ${hdgst:-false}, 00:33:39.594 "ddgst": ${ddgst:-false} 00:33:39.594 }, 00:33:39.594 "method": "bdev_nvme_attach_controller" 00:33:39.594 } 00:33:39.594 EOF 00:33:39.594 )") 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:39.594 { 00:33:39.594 "params": { 00:33:39.594 "name": "Nvme$subsystem", 00:33:39.594 "trtype": "$TEST_TRANSPORT", 00:33:39.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.594 "adrfam": "ipv4", 00:33:39.594 "trsvcid": "$NVMF_PORT", 00:33:39.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.594 "hdgst": ${hdgst:-false}, 00:33:39.594 "ddgst": ${ddgst:-false} 00:33:39.594 }, 00:33:39.594 "method": "bdev_nvme_attach_controller" 00:33:39.594 } 00:33:39.594 EOF 00:33:39.594 )") 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.594 14:43:27 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:39.594 { 00:33:39.594 "params": { 00:33:39.594 "name": "Nvme$subsystem", 00:33:39.594 "trtype": "$TEST_TRANSPORT", 00:33:39.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.594 "adrfam": "ipv4", 00:33:39.594 "trsvcid": "$NVMF_PORT", 00:33:39.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.594 "hdgst": ${hdgst:-false}, 00:33:39.594 "ddgst": ${ddgst:-false} 00:33:39.594 }, 00:33:39.594 "method": "bdev_nvme_attach_controller" 00:33:39.594 } 00:33:39.594 EOF 00:33:39.594 )") 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:39.594 "params": { 00:33:39.594 "name": "Nvme0", 00:33:39.594 "trtype": "tcp", 00:33:39.594 "traddr": "10.0.0.2", 00:33:39.594 "adrfam": "ipv4", 00:33:39.594 "trsvcid": "4420", 00:33:39.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.594 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.594 "hdgst": false, 00:33:39.594 "ddgst": false 00:33:39.594 }, 00:33:39.594 "method": "bdev_nvme_attach_controller" 00:33:39.594 },{ 00:33:39.594 "params": { 00:33:39.594 "name": "Nvme1", 00:33:39.594 "trtype": "tcp", 00:33:39.594 "traddr": "10.0.0.2", 00:33:39.594 "adrfam": "ipv4", 00:33:39.594 "trsvcid": "4420", 00:33:39.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:39.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:39.594 "hdgst": false, 00:33:39.594 "ddgst": false 00:33:39.594 }, 00:33:39.594 "method": "bdev_nvme_attach_controller" 00:33:39.594 },{ 00:33:39.594 "params": { 00:33:39.594 "name": "Nvme2", 00:33:39.594 "trtype": "tcp", 00:33:39.594 "traddr": "10.0.0.2", 00:33:39.594 "adrfam": "ipv4", 00:33:39.594 "trsvcid": "4420", 00:33:39.594 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:39.594 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:39.594 "hdgst": false, 00:33:39.594 "ddgst": false 00:33:39.594 }, 00:33:39.594 "method": "bdev_nvme_attach_controller" 00:33:39.594 }' 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:39.594 
14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:39.594 14:43:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.594 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:39.594 ... 00:33:39.594 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:39.594 ... 00:33:39.594 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:39.594 ... 00:33:39.594 fio-3.35 00:33:39.594 Starting 24 threads 00:33:51.796 00:33:51.796 filename0: (groupid=0, jobs=1): err= 0: pid=1722776: Sun Nov 17 14:43:39 2024 00:33:51.796 read: IOPS=567, BW=2270KiB/s (2324kB/s)(22.2MiB/10009msec) 00:33:51.796 slat (nsec): min=7208, max=43382, avg=12603.57, stdev=5547.44 00:33:51.796 clat (usec): min=6472, max=33849, avg=28086.56, stdev=1871.08 00:33:51.796 lat (usec): min=6485, max=33879, avg=28099.16, stdev=1870.70 00:33:51.796 clat percentiles (usec): 00:33:51.796 | 1.00th=[15008], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:33:51.796 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.796 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:33:51.796 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:33:51.796 | 99.99th=[33817] 00:33:51.796 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2265.60, stdev=93.78, samples=20 00:33:51.796 iops : min= 544, max= 640, avg=566.40, stdev=23.45, samples=20 00:33:51.796 lat (msec) : 10=0.40%, 20=0.72%, 50=98.87% 00:33:51.796 cpu : usr=98.60%, sys=1.03%, ctx=13, majf=0, minf=11 00:33:51.796 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:51.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.796 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.796 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.796 filename0: (groupid=0, jobs=1): err= 0: pid=1722777: Sun Nov 17 14:43:39 2024 00:33:51.796 read: IOPS=564, BW=2257KiB/s (2312kB/s)(22.1MiB/10008msec) 00:33:51.796 slat (nsec): min=6564, max=46913, avg=15912.54, stdev=7118.26 00:33:51.796 clat (usec): min=14879, max=33328, avg=28219.48, stdev=812.27 00:33:51.796 lat (usec): min=14892, max=33343, avg=28235.39, stdev=811.76 00:33:51.796 clat percentiles (usec): 00:33:51.796 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:33:51.796 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.796 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:33:51.796 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:33:51.796 | 99.99th=[33424] 00:33:51.796 bw ( KiB/s): min= 2176, max= 2308, per=4.16%, avg=2253.00, stdev=64.51, samples=20 00:33:51.796 iops : min= 544, max= 577, avg=563.25, stdev=16.13, samples=20 00:33:51.796 lat (msec) : 20=0.28%, 50=99.72% 00:33:51.796 cpu : usr=98.60%, sys=1.03%, ctx=5, majf=0, minf=9 00:33:51.796 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.796 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.796 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.796 filename0: (groupid=0, jobs=1): err= 0: pid=1722778: Sun Nov 17 14:43:39 2024 00:33:51.796 read: IOPS=563, BW=2256KiB/s (2310kB/s)(22.1MiB/10015msec) 00:33:51.796 slat (nsec): min=5982, max=85386, avg=34934.93, stdev=17348.34 00:33:51.796 clat (usec): min=18811, max=29411, avg=28027.32, stdev=556.07 00:33:51.796 lat (usec): min=18836, max=29462, avg=28062.26, stdev=559.00 00:33:51.796 clat percentiles (usec): 00:33:51.796 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:51.796 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:51.796 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:51.796 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[29492], 00:33:51.796 | 99.99th=[29492] 00:33:51.796 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2250.11, stdev=64.93, samples=19 00:33:51.796 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:33:51.796 lat (msec) : 20=0.28%, 50=99.72% 00:33:51.796 cpu : usr=98.56%, sys=1.06%, ctx=13, majf=0, minf=9 00:33:51.796 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.796 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.796 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.796 filename0: (groupid=0, jobs=1): err= 0: pid=1722779: Sun Nov 17 14:43:39 2024 00:33:51.796 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.1MiB/10002msec) 00:33:51.796 slat (nsec): min=6796, max=77042, avg=15381.82, stdev=5966.48 00:33:51.796 clat (usec): min=11863, max=51329, avg=28171.00, stdev=2201.00 00:33:51.796 lat (usec): min=11879, max=51361, avg=28186.38, stdev=2201.79 00:33:51.796 clat percentiles (usec): 00:33:51.796 | 1.00th=[18220], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:33:51.796 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.796 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:33:51.796 | 99.00th=[32375], 99.50th=[40109], 99.90th=[51119], 99.95th=[51119], 00:33:51.796 | 99.99th=[51119] 00:33:51.796 bw ( KiB/s): min= 2048, max= 2352, per=4.16%, avg=2253.47, stdev=77.70, samples=19 00:33:51.796 iops : min= 512, max= 588, avg=563.37, stdev=19.43, samples=19 00:33:51.796 lat (msec) : 20=1.38%, 50=98.34%, 100=0.28% 00:33:51.796 cpu : usr=98.50%, sys=1.12%, ctx=13, majf=0, minf=9 00:33:51.796 IO depths : 1=5.5%, 2=11.2%, 4=23.1%, 8=52.9%, 16=7.4%, 32=0.0%, >=64=0.0% 00:33:51.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.796 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.796 issued rwts: total=5656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.796 filename0: (groupid=0, jobs=1): err= 0: pid=1722780: Sun Nov 17 14:43:39 2024 00:33:51.796 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.0MiB/10006msec) 00:33:51.796 slat (nsec): min=4504, max=46589, avg=23273.79, stdev=6863.05 00:33:51.796 
clat (usec): min=19956, max=45076, avg=28222.12, stdev=1142.15 00:33:51.796 lat (usec): min=19974, max=45088, avg=28245.39, stdev=1141.60 00:33:51.796 clat percentiles (usec): 00:33:51.796 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:33:51.796 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.796 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28705], 95.00th=[28705], 00:33:51.796 | 99.00th=[29230], 99.50th=[33817], 99.90th=[44827], 99.95th=[44827], 00:33:51.796 | 99.99th=[44827] 00:33:51.796 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2249.26, stdev=64.29, samples=19 00:33:51.796 iops : min= 544, max= 576, avg=562.32, stdev=16.07, samples=19 00:33:51.796 lat (msec) : 20=0.05%, 50=99.95% 00:33:51.796 cpu : usr=98.47%, sys=1.15%, ctx=13, majf=0, minf=9 00:33:51.796 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:51.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.796 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.797 filename0: (groupid=0, jobs=1): err= 0: pid=1722781: Sun Nov 17 14:43:39 2024 00:33:51.797 read: IOPS=563, BW=2252KiB/s (2306kB/s)(22.0MiB/10003msec) 00:33:51.797 slat (nsec): min=4780, max=48846, avg=23087.23, stdev=7294.76 00:33:51.797 clat (usec): min=20010, max=42356, avg=28207.24, stdev=1139.64 00:33:51.797 lat (usec): min=20034, max=42368, avg=28230.33, stdev=1139.69 00:33:51.797 clat percentiles (usec): 00:33:51.797 | 1.00th=[23987], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:33:51.797 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.797 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28705], 95.00th=[28967], 00:33:51.797 | 99.00th=[32900], 99.50th=[33424], 99.90th=[42206], 99.95th=[42206], 00:33:51.797 | 99.99th=[42206] 00:33:51.797 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2250.11, stdev=63.38, samples=19 00:33:51.797 iops : min= 544, max= 576, avg=562.53, stdev=15.84, samples=19 00:33:51.797 lat (msec) : 50=100.00% 00:33:51.797 cpu : usr=98.58%, sys=1.04%, ctx=13, majf=0, minf=9 00:33:51.797 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:33:51.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.797 filename0: (groupid=0, jobs=1): err= 0: pid=1722782: Sun Nov 17 14:43:39 2024 00:33:51.797 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.1MiB/10015msec) 00:33:51.797 slat (nsec): min=4868, max=88251, avg=38762.38, stdev=18315.22 00:33:51.797 clat (usec): min=9120, max=29412, avg=27919.65, stdev=1258.04 00:33:51.797 lat (usec): min=9128, max=29426, avg=27958.42, stdev=1260.80 00:33:51.797 clat percentiles (usec): 00:33:51.797 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:51.797 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:51.797 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:51.797 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[29492], 00:33:51.797 | 99.99th=[29492] 00:33:51.797 bw ( KiB/s): min= 2176, max= 2436, per=4.17%, avg=2259.40, 
stdev=75.64, samples=20 00:33:51.797 iops : min= 544, max= 609, avg=564.85, stdev=18.91, samples=20 00:33:51.797 lat (msec) : 10=0.04%, 20=0.78%, 50=99.19% 00:33:51.797 cpu : usr=98.58%, sys=1.04%, ctx=15, majf=0, minf=9 00:33:51.797 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.797 filename0: (groupid=0, jobs=1): err= 0: pid=1722783: Sun Nov 17 14:43:39 2024 00:33:51.797 read: IOPS=564, BW=2257KiB/s (2312kB/s)(22.1MiB/10008msec) 00:33:51.797 slat (nsec): min=9313, max=84562, avg=34863.65, stdev=17603.58 00:33:51.797 clat (usec): min=14633, max=29445, avg=28009.40, stdev=778.62 00:33:51.797 lat (usec): min=14642, max=29470, avg=28044.27, stdev=781.19 00:33:51.797 clat percentiles (usec): 00:33:51.797 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:51.797 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:51.797 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:51.797 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:33:51.797 | 99.99th=[29492] 00:33:51.797 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2252.80, stdev=64.34, samples=20 00:33:51.797 iops : min= 544, max= 576, avg=563.20, stdev=16.08, samples=20 00:33:51.797 lat (msec) : 20=0.28%, 50=99.72% 00:33:51.797 cpu : usr=98.52%, sys=1.10%, ctx=7, majf=0, minf=9 00:33:51.797 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.797 filename1: (groupid=0, jobs=1): err= 0: pid=1722784: Sun Nov 17 14:43:39 2024 00:33:51.797 read: IOPS=563, BW=2252KiB/s (2306kB/s)(22.0MiB/10003msec) 00:33:51.797 slat (nsec): min=4435, max=88040, avg=35725.63, stdev=18946.42 00:33:51.797 clat (usec): min=18813, max=43416, avg=28050.26, stdev=988.11 00:33:51.797 lat (usec): min=18827, max=43430, avg=28085.98, stdev=988.96 00:33:51.797 clat percentiles (usec): 00:33:51.797 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:51.797 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:51.797 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:51.797 | 99.00th=[28967], 99.50th=[29230], 99.90th=[43254], 99.95th=[43254], 00:33:51.797 | 99.99th=[43254] 00:33:51.797 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2250.32, stdev=64.68, samples=19 00:33:51.797 iops : min= 544, max= 576, avg=562.58, stdev=16.17, samples=19 00:33:51.797 lat (msec) : 20=0.28%, 50=99.72% 00:33:51.797 cpu : usr=98.60%, sys=1.03%, ctx=13, majf=0, minf=9 00:33:51.797 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.797 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:33:51.797 filename1: (groupid=0, jobs=1): err= 0: pid=1722785: Sun Nov 17 14:43:39 2024 00:33:51.797 read: IOPS=567, BW=2269KiB/s (2323kB/s)(22.2MiB/10015msec) 00:33:51.797 slat (nsec): min=7476, max=86784, avg=36878.28, stdev=18398.78 00:33:51.797 clat (usec): min=8881, max=30133, avg=27848.38, stdev=1761.06 00:33:51.797 lat (usec): min=8890, max=30145, avg=27885.26, stdev=1764.09 00:33:51.797 clat percentiles (usec): 00:33:51.797 | 1.00th=[14877], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:51.797 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:51.797 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:51.797 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29492], 99.95th=[29492], 00:33:51.797 | 99.99th=[30016] 00:33:51.797 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2265.60, stdev=93.78, samples=20 00:33:51.797 iops : min= 544, max= 640, avg=566.40, stdev=23.45, samples=20 00:33:51.797 lat (msec) : 10=0.37%, 20=0.76%, 50=98.87% 00:33:51.797 cpu : usr=98.51%, sys=1.12%, ctx=13, majf=0, minf=9 00:33:51.797 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:51.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.797 filename1: (groupid=0, jobs=1): err= 0: pid=1722786: Sun Nov 17 14:43:39 2024 00:33:51.797 read: IOPS=587, BW=2352KiB/s (2408kB/s)(23.0MiB/10002msec) 00:33:51.797 slat (nsec): min=6773, max=88894, avg=18010.42, stdev=11976.43 00:33:51.797 clat (usec): min=3940, max=66778, avg=27103.22, stdev=4141.54 00:33:51.797 lat (usec): min=3948, max=66804, avg=27121.23, stdev=4142.86 00:33:51.797 clat percentiles (usec): 00:33:51.797 | 1.00th=[16712], 5.00th=[17433], 10.00th=[20579], 20.00th=[26346], 00:33:51.797 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.797 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28967], 00:33:51.797 | 99.00th=[40109], 99.50th=[40109], 99.90th=[51643], 99.95th=[51643], 00:33:51.797 | 99.99th=[66847] 00:33:51.797 bw ( KiB/s): min= 2048, max= 2688, per=4.33%, avg=2347.79, stdev=176.95, samples=19 00:33:51.797 iops : min= 512, max= 672, avg=586.95, stdev=44.24, samples=19 00:33:51.797 lat (msec) : 4=0.03%, 20=9.18%, 50=90.51%, 100=0.27% 00:33:51.797 cpu : usr=98.63%, sys=0.98%, ctx=23, majf=0, minf=9 00:33:51.797 IO depths : 1=0.1%, 2=4.1%, 4=17.7%, 8=65.0%, 16=13.2%, 32=0.0%, >=64=0.0% 00:33:51.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 complete : 0=0.0%, 4=92.4%, 8=2.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 issued rwts: total=5880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.797 filename1: (groupid=0, jobs=1): err= 0: pid=1722787: Sun Nov 17 14:43:39 2024 00:33:51.797 read: IOPS=566, BW=2267KiB/s (2321kB/s)(22.2MiB/10023msec) 00:33:51.797 slat (nsec): min=6856, max=84288, avg=32066.59, stdev=19208.81 00:33:51.797 clat (usec): min=8542, max=38743, avg=27971.51, stdev=2419.14 00:33:51.797 lat (usec): min=8549, max=38765, avg=28003.58, stdev=2420.64 00:33:51.797 clat percentiles (usec): 00:33:51.797 | 1.00th=[14746], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:51.797 | 30.00th=[27919], 
40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.797 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:51.797 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:33:51.797 | 99.99th=[38536] 00:33:51.797 bw ( KiB/s): min= 2176, max= 2544, per=4.18%, avg=2265.60, stdev=85.05, samples=20 00:33:51.797 iops : min= 544, max= 636, avg=566.40, stdev=21.26, samples=20 00:33:51.797 lat (msec) : 10=0.42%, 20=2.04%, 50=97.54% 00:33:51.797 cpu : usr=98.38%, sys=1.25%, ctx=17, majf=0, minf=9 00:33:51.797 IO depths : 1=1.4%, 2=7.5%, 4=24.7%, 8=55.2%, 16=11.1%, 32=0.0%, >=64=0.0% 00:33:51.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.797 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.798 filename1: (groupid=0, jobs=1): err= 0: pid=1722788: Sun Nov 17 14:43:39 2024 00:33:51.798 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.0MiB/10008msec) 00:33:51.798 slat (nsec): min=7247, max=52216, avg=23219.07, stdev=6519.46 00:33:51.798 clat (usec): min=20218, max=48438, avg=28217.87, stdev=941.47 00:33:51.798 lat (usec): min=20235, max=48450, avg=28241.09, stdev=940.91 00:33:51.798 clat percentiles (usec): 00:33:51.798 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:33:51.798 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.798 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:33:51.798 | 99.00th=[29230], 99.50th=[29230], 99.90th=[42206], 99.95th=[42206], 00:33:51.798 | 99.99th=[48497] 00:33:51.798 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2250.11, stdev=64.93, samples=19 00:33:51.798 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:33:51.798 lat (msec) : 50=100.00% 00:33:51.798 cpu : usr=98.54%, sys=1.07%, ctx=14, majf=0, minf=9 00:33:51.798 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.798 filename1: (groupid=0, jobs=1): err= 0: pid=1722789: Sun Nov 17 14:43:39 2024 00:33:51.798 read: IOPS=562, BW=2252KiB/s (2306kB/s)(22.0MiB/10004msec) 00:33:51.798 slat (nsec): min=6340, max=87311, avg=35067.62, stdev=19213.58 00:33:51.798 clat (usec): min=18818, max=44450, avg=28054.83, stdev=1033.11 00:33:51.798 lat (usec): min=18826, max=44468, avg=28089.90, stdev=1034.08 00:33:51.798 clat percentiles (usec): 00:33:51.798 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:51.798 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:51.798 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:51.798 | 99.00th=[28967], 99.50th=[29230], 99.90th=[44303], 99.95th=[44303], 00:33:51.798 | 99.99th=[44303] 00:33:51.798 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2250.11, stdev=64.93, samples=19 00:33:51.798 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:33:51.798 lat (msec) : 20=0.28%, 50=99.72% 00:33:51.798 cpu : usr=98.56%, sys=1.07%, ctx=13, majf=0, minf=9 00:33:51.798 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 
32=0.0%, >=64=0.0% 00:33:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.798 filename1: (groupid=0, jobs=1): err= 0: pid=1722790: Sun Nov 17 14:43:39 2024 00:33:51.798 read: IOPS=564, BW=2256KiB/s (2310kB/s)(22.1MiB/10014msec) 00:33:51.798 slat (nsec): min=5985, max=44912, avg=22143.07, stdev=7093.77 00:33:51.798 clat (usec): min=19920, max=29634, avg=28186.33, stdev=574.90 00:33:51.798 lat (usec): min=19934, max=29652, avg=28208.48, stdev=574.28 00:33:51.798 clat percentiles (usec): 00:33:51.798 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:33:51.798 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.798 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:33:51.798 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:33:51.798 | 99.99th=[29754] 00:33:51.798 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2252.80, stdev=64.34, samples=20 00:33:51.798 iops : min= 544, max= 576, avg=563.20, stdev=16.08, samples=20 00:33:51.798 lat (msec) : 20=0.07%, 50=99.93% 00:33:51.798 cpu : usr=98.48%, sys=1.15%, ctx=7, majf=0, minf=9 00:33:51.798 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.798 filename1: (groupid=0, jobs=1): err= 0: pid=1722791: Sun Nov 17 14:43:39 2024 00:33:51.798 read: IOPS=563, BW=2253KiB/s (2307kB/s)(22.0MiB/10001msec) 00:33:51.798 slat (nsec): min=5822, max=47278, avg=22540.34, stdev=7256.84 00:33:51.798 clat (usec): min=19988, max=40161, avg=28198.97, stdev=809.35 00:33:51.798 lat (usec): min=20017, max=40176, avg=28221.51, stdev=809.21 00:33:51.798 clat percentiles (usec): 00:33:51.798 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:33:51.798 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.798 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28705], 95.00th=[28705], 00:33:51.798 | 99.00th=[28967], 99.50th=[29230], 99.90th=[40109], 99.95th=[40109], 00:33:51.798 | 99.99th=[40109] 00:33:51.798 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2250.11, stdev=64.93, samples=19 00:33:51.798 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:33:51.798 lat (msec) : 20=0.02%, 50=99.98% 00:33:51.798 cpu : usr=98.53%, sys=1.09%, ctx=11, majf=0, minf=9 00:33:51.798 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.798 filename2: (groupid=0, jobs=1): err= 0: pid=1722792: Sun Nov 17 14:43:39 2024 00:33:51.798 read: IOPS=563, BW=2256KiB/s (2310kB/s)(22.1MiB/10016msec) 00:33:51.798 slat (nsec): min=6664, max=87365, avg=37026.12, stdev=18545.65 00:33:51.798 clat (usec): 
min=18386, max=38052, avg=28001.87, stdev=620.79 00:33:51.798 lat (usec): min=18405, max=38070, avg=28038.90, stdev=623.95 00:33:51.798 clat percentiles (usec): 00:33:51.798 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:51.798 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:51.798 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:51.798 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29492], 99.95th=[29492], 00:33:51.798 | 99.99th=[38011] 00:33:51.798 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2250.11, stdev=64.93, samples=19 00:33:51.798 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:33:51.798 lat (msec) : 20=0.32%, 50=99.68% 00:33:51.798 cpu : usr=98.48%, sys=1.14%, ctx=12, majf=0, minf=9 00:33:51.798 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.798 filename2: (groupid=0, jobs=1): err= 0: pid=1722793: Sun Nov 17 14:43:39 2024 00:33:51.798 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10022msec) 00:33:51.798 slat (nsec): min=6803, max=33571, avg=9877.50, stdev=3105.09 00:33:51.798 clat (usec): min=12390, max=43946, avg=28224.94, stdev=1940.07 00:33:51.798 lat (usec): min=12398, max=43954, avg=28234.82, stdev=1940.10 00:33:51.798 clat percentiles (usec): 00:33:51.798 | 1.00th=[16909], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:33:51.798 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.798 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:33:51.798 | 99.00th=[29230], 99.50th=[40633], 99.90th=[43779], 99.95th=[43779], 00:33:51.798 | 99.99th=[43779] 00:33:51.798 bw ( KiB/s): min= 2176, max= 2308, per=4.17%, avg=2259.40, stdev=61.27, samples=20 00:33:51.798 iops : min= 544, max= 577, avg=564.85, stdev=15.32, samples=20 00:33:51.798 lat (msec) : 20=1.24%, 50=98.76% 00:33:51.798 cpu : usr=98.48%, sys=1.15%, ctx=13, majf=0, minf=11 00:33:51.798 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:33:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.798 filename2: (groupid=0, jobs=1): err= 0: pid=1722794: Sun Nov 17 14:43:39 2024 00:33:51.798 read: IOPS=567, BW=2269KiB/s (2323kB/s)(22.2MiB/10015msec) 00:33:51.798 slat (nsec): min=6561, max=85062, avg=32109.16, stdev=18615.91 00:33:51.798 clat (usec): min=9289, max=38076, avg=27927.64, stdev=1915.81 00:33:51.798 lat (usec): min=9296, max=38106, avg=27959.75, stdev=1916.86 00:33:51.798 clat percentiles (usec): 00:33:51.798 | 1.00th=[14746], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:51.798 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:33:51.798 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:51.798 | 99.00th=[29230], 99.50th=[29492], 99.90th=[38011], 99.95th=[38011], 00:33:51.798 | 99.99th=[38011] 00:33:51.798 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2265.60, 
stdev=93.78, samples=20 00:33:51.798 iops : min= 544, max= 640, avg=566.40, stdev=23.45, samples=20 00:33:51.798 lat (msec) : 10=0.28%, 20=1.09%, 50=98.63% 00:33:51.798 cpu : usr=98.49%, sys=1.13%, ctx=14, majf=0, minf=9 00:33:51.798 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.798 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.798 filename2: (groupid=0, jobs=1): err= 0: pid=1722795: Sun Nov 17 14:43:39 2024 00:33:51.798 read: IOPS=563, BW=2252KiB/s (2306kB/s)(22.0MiB/10003msec) 00:33:51.798 slat (nsec): min=4311, max=48550, avg=23243.46, stdev=7123.59 00:33:51.798 clat (usec): min=19955, max=47715, avg=28202.24, stdev=917.65 00:33:51.798 lat (usec): min=19969, max=47728, avg=28225.49, stdev=917.21 00:33:51.798 clat percentiles (usec): 00:33:51.799 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:33:51.799 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.799 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28705], 95.00th=[28705], 00:33:51.799 | 99.00th=[28967], 99.50th=[29230], 99.90th=[41681], 99.95th=[41681], 00:33:51.799 | 99.99th=[47973] 00:33:51.799 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2250.32, stdev=64.68, samples=19 00:33:51.799 iops : min= 544, max= 576, avg=562.58, stdev=16.17, samples=19 00:33:51.799 lat (msec) : 20=0.05%, 50=99.95% 00:33:51.799 cpu : usr=98.47%, sys=1.12%, ctx=11, majf=0, minf=9 00:33:51.799 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.799 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.799 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.799 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.799 filename2: (groupid=0, jobs=1): err= 0: pid=1722796: Sun Nov 17 14:43:39 2024 00:33:51.799 read: IOPS=563, BW=2252KiB/s (2306kB/s)(22.0MiB/10003msec) 00:33:51.799 slat (nsec): min=4155, max=48714, avg=22907.63, stdev=7412.10 00:33:51.799 clat (usec): min=20002, max=42489, avg=28228.10, stdev=923.01 00:33:51.799 lat (usec): min=20034, max=42501, avg=28251.01, stdev=922.05 00:33:51.799 clat percentiles (usec): 00:33:51.799 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:33:51.799 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.799 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:33:51.799 | 99.00th=[28967], 99.50th=[29230], 99.90th=[42206], 99.95th=[42730], 00:33:51.799 | 99.99th=[42730] 00:33:51.799 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2250.11, stdev=64.93, samples=19 00:33:51.799 iops : min= 544, max= 576, avg=562.53, stdev=16.23, samples=19 00:33:51.799 lat (msec) : 50=100.00% 00:33:51.799 cpu : usr=98.70%, sys=0.92%, ctx=13, majf=0, minf=9 00:33:51.799 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.799 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.799 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.799 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:33:51.799 filename2: (groupid=0, jobs=1): err= 0: pid=1722797: Sun Nov 17 14:43:39 2024 00:33:51.799 read: IOPS=567, BW=2269KiB/s (2323kB/s)(22.2MiB/10015msec) 00:33:51.799 slat (nsec): min=6818, max=71347, avg=28828.75, stdev=13006.66 00:33:51.799 clat (usec): min=9713, max=38584, avg=27981.73, stdev=1815.69 00:33:51.799 lat (usec): min=9720, max=38592, avg=28010.56, stdev=1815.90 00:33:51.799 clat percentiles (usec): 00:33:51.799 | 1.00th=[14746], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:51.799 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.799 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:33:51.799 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[29492], 00:33:51.799 | 99.99th=[38536] 00:33:51.799 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2265.60, stdev=93.78, samples=20 00:33:51.799 iops : min= 544, max= 640, avg=566.40, stdev=23.45, samples=20 00:33:51.799 lat (msec) : 10=0.28%, 20=0.88%, 50=98.84% 00:33:51.799 cpu : usr=98.65%, sys=1.00%, ctx=27, majf=0, minf=9 00:33:51.799 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.799 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.799 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.799 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.799 filename2: (groupid=0, jobs=1): err= 0: pid=1722798: Sun Nov 17 14:43:39 2024 00:33:51.799 read: IOPS=563, BW=2252KiB/s (2306kB/s)(22.0MiB/10002msec) 00:33:51.799 slat (nsec): min=6792, max=46003, avg=17119.14, stdev=5339.41 00:33:51.799 clat (usec): min=11705, max=51678, avg=28256.38, stdev=1539.16 00:33:51.799 lat (usec): min=11712, max=51705, avg=28273.49, stdev=1539.25 00:33:51.799 clat percentiles (usec): 00:33:51.799 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:33:51.799 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:33:51.799 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:33:51.799 | 99.00th=[28967], 99.50th=[29230], 99.90th=[51643], 99.95th=[51643], 00:33:51.799 | 99.99th=[51643] 00:33:51.799 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2243.37, stdev=78.31, samples=19 00:33:51.799 iops : min= 512, max= 576, avg=560.84, stdev=19.58, samples=19 00:33:51.799 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:33:51.799 cpu : usr=98.43%, sys=1.19%, ctx=15, majf=0, minf=9 00:33:51.799 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.799 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.799 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.799 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.799 filename2: (groupid=0, jobs=1): err= 0: pid=1722799: Sun Nov 17 14:43:39 2024 00:33:51.799 read: IOPS=563, BW=2252KiB/s (2306kB/s)(22.0MiB/10002msec) 00:33:51.799 slat (nsec): min=6889, max=46610, avg=17711.22, stdev=5316.86 00:33:51.799 clat (usec): min=11770, max=51212, avg=28256.74, stdev=1928.63 00:33:51.799 lat (usec): min=11785, max=51258, avg=28274.46, stdev=1929.22 00:33:51.799 clat percentiles (usec): 00:33:51.799 | 1.00th=[25822], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:33:51.799 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 
60.00th=[28181], 00:33:51.799 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:33:51.799 | 99.00th=[29230], 99.50th=[41157], 99.90th=[51119], 99.95th=[51119], 00:33:51.799 | 99.99th=[51119] 00:33:51.799 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2243.37, stdev=77.03, samples=19 00:33:51.799 iops : min= 512, max= 576, avg=560.84, stdev=19.26, samples=19 00:33:51.799 lat (msec) : 20=0.75%, 50=98.97%, 100=0.28% 00:33:51.799 cpu : usr=98.65%, sys=0.98%, ctx=13, majf=0, minf=9 00:33:51.799 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:33:51.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.799 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.799 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.799 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.799 00:33:51.799 Run status group 0 (all jobs): 00:33:51.799 READ: bw=52.9MiB/s (55.5MB/s), 2251KiB/s-2352KiB/s (2305kB/s-2408kB/s), io=531MiB (556MB), run=10001-10023msec 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.799 14:43:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:51.799 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.800 bdev_null0 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.800 [2024-11-17 14:43:39.488449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.800 bdev_null1 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
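For reference, the subsystem setup traced above reduces to four RPCs per subsystem: create a null bdev with 16-byte metadata and DIF type 1, create the NVMe-oF subsystem, attach the bdev as a namespace, and open a TCP listener. A minimal sketch with the arguments from this run (issuing them through scripts/rpc.py is an assumption; the test uses its own rpc_cmd wrapper):

    # Null bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420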
00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:51.800 { 00:33:51.800 "params": { 00:33:51.800 "name": "Nvme$subsystem", 00:33:51.800 "trtype": "$TEST_TRANSPORT", 00:33:51.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.800 "adrfam": "ipv4", 00:33:51.800 "trsvcid": "$NVMF_PORT", 00:33:51.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.800 "hdgst": ${hdgst:-false}, 00:33:51.800 "ddgst": ${ddgst:-false} 00:33:51.800 }, 00:33:51.800 "method": "bdev_nvme_attach_controller" 00:33:51.800 } 00:33:51.800 EOF 00:33:51.800 )") 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:51.800 { 00:33:51.800 "params": { 00:33:51.800 "name": "Nvme$subsystem", 00:33:51.800 "trtype": "$TEST_TRANSPORT", 00:33:51.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.800 "adrfam": "ipv4", 00:33:51.800 "trsvcid": "$NVMF_PORT", 00:33:51.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.800 "hdgst": ${hdgst:-false}, 00:33:51.800 "ddgst": ${ddgst:-false} 00:33:51.800 }, 00:33:51.800 "method": "bdev_nvme_attach_controller" 00:33:51.800 } 00:33:51.800 EOF 00:33:51.800 )") 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:51.800 14:43:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:51.800 "params": { 00:33:51.800 "name": "Nvme0", 00:33:51.800 "trtype": "tcp", 00:33:51.800 "traddr": "10.0.0.2", 00:33:51.800 "adrfam": "ipv4", 00:33:51.800 "trsvcid": "4420", 00:33:51.800 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:51.800 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:51.800 "hdgst": false, 00:33:51.800 "ddgst": false 00:33:51.800 }, 00:33:51.800 "method": "bdev_nvme_attach_controller" 00:33:51.800 },{ 00:33:51.800 "params": { 00:33:51.800 "name": "Nvme1", 00:33:51.800 "trtype": "tcp", 00:33:51.800 "traddr": "10.0.0.2", 00:33:51.800 "adrfam": "ipv4", 00:33:51.800 "trsvcid": "4420", 00:33:51.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:51.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:51.800 "hdgst": false, 00:33:51.800 "ddgst": false 00:33:51.800 }, 00:33:51.800 "method": "bdev_nvme_attach_controller" 00:33:51.800 }' 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.800 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.801 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:51.801 14:43:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.801 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:51.801 ... 00:33:51.801 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:51.801 ... 
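The launch traced above preloads the SPDK fio plugin and hands fio two descriptors: the bdev JSON config on /dev/fd/62 (the printf output above, whose bdev_nvme_attach_controller entries become the fio filenames listed here) and the generated job file on /dev/fd/61. Stripped of the fd plumbing, the equivalent direct invocation is (a sketch; bdev.json and job.fio stand in for the two descriptors):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio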
00:33:51.801 fio-3.35 00:33:51.801 Starting 4 threads 00:33:57.090 00:33:57.090 filename0: (groupid=0, jobs=1): err= 0: pid=1724743: Sun Nov 17 14:43:45 2024 00:33:57.090 read: IOPS=2644, BW=20.7MiB/s (21.7MB/s)(103MiB/5002msec) 00:33:57.090 slat (nsec): min=6054, max=68766, avg=13026.68, stdev=9678.59 00:33:57.090 clat (usec): min=515, max=5730, avg=2982.92, stdev=402.94 00:33:57.090 lat (usec): min=527, max=5763, avg=2995.95, stdev=403.61 00:33:57.090 clat percentiles (usec): 00:33:57.090 | 1.00th=[ 1827], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2704], 00:33:57.090 | 30.00th=[ 2868], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:33:57.090 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3359], 95.00th=[ 3556], 00:33:57.090 | 99.00th=[ 4146], 99.50th=[ 4424], 99.90th=[ 4883], 99.95th=[ 5145], 00:33:57.090 | 99.99th=[ 5669] 00:33:57.090 bw ( KiB/s): min=20528, max=21680, per=25.96%, avg=21200.44, stdev=348.97, samples=9 00:33:57.090 iops : min= 2566, max= 2710, avg=2650.00, stdev=43.55, samples=9 00:33:57.090 lat (usec) : 750=0.02%, 1000=0.03% 00:33:57.090 lat (msec) : 2=1.55%, 4=96.86%, 10=1.55% 00:33:57.090 cpu : usr=96.86%, sys=2.82%, ctx=7, majf=0, minf=9 00:33:57.090 IO depths : 1=0.4%, 2=6.2%, 4=65.8%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.090 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.090 issued rwts: total=13229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.090 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:57.090 filename0: (groupid=0, jobs=1): err= 0: pid=1724744: Sun Nov 17 14:43:45 2024 00:33:57.090 read: IOPS=2531, BW=19.8MiB/s (20.7MB/s)(98.9MiB/5001msec) 00:33:57.090 slat (nsec): min=5963, max=68332, avg=12987.28, stdev=9758.95 00:33:57.090 clat (usec): min=797, max=5685, avg=3118.97, stdev=400.06 00:33:57.090 lat (usec): min=820, max=5696, avg=3131.96, stdev=399.99 00:33:57.090 clat percentiles (usec): 00:33:57.090 | 1.00th=[ 2024], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2933], 00:33:57.090 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3130], 00:33:57.090 | 70.00th=[ 3228], 80.00th=[ 3326], 90.00th=[ 3556], 95.00th=[ 3785], 00:33:57.090 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5145], 99.95th=[ 5342], 00:33:57.090 | 99.99th=[ 5669] 00:33:57.090 bw ( KiB/s): min=19760, max=20656, per=24.84%, avg=20286.22, stdev=326.38, samples=9 00:33:57.090 iops : min= 2470, max= 2582, avg=2535.78, stdev=40.80, samples=9 00:33:57.090 lat (usec) : 1000=0.04% 00:33:57.090 lat (msec) : 2=0.84%, 4=96.22%, 10=2.90% 00:33:57.090 cpu : usr=96.76%, sys=2.92%, ctx=7, majf=0, minf=9 00:33:57.090 IO depths : 1=0.3%, 2=5.1%, 4=66.9%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.090 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.090 issued rwts: total=12662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.090 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:57.090 filename1: (groupid=0, jobs=1): err= 0: pid=1724745: Sun Nov 17 14:43:45 2024 00:33:57.090 read: IOPS=2600, BW=20.3MiB/s (21.3MB/s)(102MiB/5042msec) 00:33:57.090 slat (nsec): min=6129, max=60782, avg=12547.98, stdev=6523.25 00:33:57.090 clat (usec): min=886, max=42903, avg=3029.56, stdev=876.22 00:33:57.090 lat (usec): min=893, max=42910, avg=3042.11, stdev=876.20 00:33:57.090 clat percentiles (usec): 00:33:57.090 | 1.00th=[ 1975], 
5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2737], 00:33:57.090 | 30.00th=[ 2900], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3064], 00:33:57.090 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3392], 95.00th=[ 3687], 00:33:57.090 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5276], 99.95th=[ 5538], 00:33:57.090 | 99.99th=[42730] 00:33:57.090 bw ( KiB/s): min=20160, max=21920, per=25.68%, avg=20974.40, stdev=542.98, samples=10 00:33:57.090 iops : min= 2520, max= 2740, avg=2621.80, stdev=67.87, samples=10 00:33:57.090 lat (usec) : 1000=0.02% 00:33:57.090 lat (msec) : 2=1.06%, 4=96.48%, 10=2.40%, 50=0.04% 00:33:57.090 cpu : usr=97.02%, sys=2.66%, ctx=8, majf=0, minf=9 00:33:57.090 IO depths : 1=0.4%, 2=5.7%, 4=65.4%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.090 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.090 issued rwts: total=13114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.090 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:57.090 filename1: (groupid=0, jobs=1): err= 0: pid=1724746: Sun Nov 17 14:43:45 2024 00:33:57.090 read: IOPS=2493, BW=19.5MiB/s (20.4MB/s)(97.4MiB/5001msec) 00:33:57.090 slat (nsec): min=6030, max=68734, avg=13178.63, stdev=10070.72 00:33:57.090 clat (usec): min=650, max=5598, avg=3167.34, stdev=401.70 00:33:57.090 lat (usec): min=657, max=5611, avg=3180.52, stdev=401.55 00:33:57.090 clat percentiles (usec): 00:33:57.090 | 1.00th=[ 2147], 5.00th=[ 2606], 10.00th=[ 2835], 20.00th=[ 2999], 00:33:57.090 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3163], 00:33:57.090 | 70.00th=[ 3261], 80.00th=[ 3359], 90.00th=[ 3621], 95.00th=[ 3851], 00:33:57.090 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[ 5473], 00:33:57.090 | 99.99th=[ 5604] 00:33:57.091 bw ( KiB/s): min=19568, max=20248, per=24.42%, avg=19947.56, stdev=246.61, samples=9 00:33:57.091 iops : min= 2446, max= 2531, avg=2493.44, stdev=30.83, samples=9 00:33:57.091 lat (usec) : 750=0.05%, 1000=0.01% 00:33:57.091 lat (msec) : 2=0.54%, 4=95.82%, 10=3.58% 00:33:57.091 cpu : usr=96.72%, sys=2.94%, ctx=8, majf=0, minf=9 00:33:57.091 IO depths : 1=0.1%, 2=4.2%, 4=68.4%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.091 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.091 issued rwts: total=12472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.091 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:57.091 00:33:57.091 Run status group 0 (all jobs): 00:33:57.091 READ: bw=79.8MiB/s (83.6MB/s), 19.5MiB/s-20.7MiB/s (20.4MB/s-21.7MB/s), io=402MiB (422MB), run=5001-5042msec 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.091 00:33:57.091 real 0m24.246s 00:33:57.091 user 4m52.514s 00:33:57.091 sys 0m4.988s 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:57.091 14:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:57.091 ************************************ 00:33:57.091 END TEST fio_dif_rand_params 00:33:57.091 ************************************ 00:33:57.091 14:43:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:57.091 14:43:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:57.091 14:43:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:57.091 14:43:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:57.091 ************************************ 00:33:57.091 START TEST fio_dif_digest 00:33:57.091 ************************************ 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:57.091 bdev_null0 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:57.091 [2024-11-17 14:43:45.956135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:57.091 { 00:33:57.091 "params": { 00:33:57.091 "name": 
"Nvme$subsystem", 00:33:57.091 "trtype": "$TEST_TRANSPORT", 00:33:57.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:57.091 "adrfam": "ipv4", 00:33:57.091 "trsvcid": "$NVMF_PORT", 00:33:57.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:57.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:57.091 "hdgst": ${hdgst:-false}, 00:33:57.091 "ddgst": ${ddgst:-false} 00:33:57.091 }, 00:33:57.091 "method": "bdev_nvme_attach_controller" 00:33:57.091 } 00:33:57.091 EOF 00:33:57.091 )") 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:57.091 14:43:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:57.091 "params": { 00:33:57.091 "name": "Nvme0", 00:33:57.091 "trtype": "tcp", 00:33:57.091 "traddr": "10.0.0.2", 00:33:57.091 "adrfam": "ipv4", 00:33:57.091 "trsvcid": "4420", 00:33:57.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:57.091 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:57.091 "hdgst": true, 00:33:57.091 "ddgst": true 00:33:57.091 }, 00:33:57.091 "method": "bdev_nvme_attach_controller" 00:33:57.091 }' 00:33:57.091 14:43:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:57.092 14:43:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:57.092 14:43:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:57.092 14:43:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:57.092 14:43:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:57.092 14:43:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:57.092 14:43:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:57.092 14:43:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:57.092 14:43:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:57.092 14:43:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:57.351 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:57.351 ... 
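This digest pass differs from the earlier runs only in the attach parameters: "hdgst": true and "ddgst": true enable the NVMe/TCP header and data digests (CRC32C checks on each PDU), which is what fio_dif_digest exercises. The relevant JSON-RPC call, as printed above, is:

    {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true, "ddgst": true
      }
    }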
00:33:57.351 fio-3.35 00:33:57.351 Starting 3 threads 00:34:09.561 00:34:09.561 filename0: (groupid=0, jobs=1): err= 0: pid=1725810: Sun Nov 17 14:43:56 2024 00:34:09.561 read: IOPS=303, BW=38.0MiB/s (39.8MB/s)(382MiB/10046msec) 00:34:09.561 slat (nsec): min=6404, max=37583, avg=12137.36, stdev=2590.75 00:34:09.561 clat (usec): min=4322, max=50207, avg=9843.71, stdev=1242.07 00:34:09.561 lat (usec): min=4331, max=50216, avg=9855.85, stdev=1241.97 00:34:09.561 clat percentiles (usec): 00:34:09.561 | 1.00th=[ 7963], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9241], 00:34:09.561 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:34:09.561 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:34:09.561 | 99.00th=[11600], 99.50th=[11731], 99.90th=[12256], 99.95th=[46924], 00:34:09.561 | 99.99th=[50070] 00:34:09.561 bw ( KiB/s): min=37376, max=41728, per=35.84%, avg=39052.80, stdev=1052.16, samples=20 00:34:09.561 iops : min= 292, max= 326, avg=305.10, stdev= 8.22, samples=20 00:34:09.561 lat (msec) : 10=60.01%, 20=39.93%, 50=0.03%, 100=0.03% 00:34:09.561 cpu : usr=95.56%, sys=4.12%, ctx=21, majf=0, minf=0 00:34:09.561 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.562 issued rwts: total=3053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.562 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:09.562 filename0: (groupid=0, jobs=1): err= 0: pid=1725811: Sun Nov 17 14:43:56 2024 00:34:09.562 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(350MiB/10046msec) 00:34:09.562 slat (nsec): min=6498, max=40577, avg=12640.68, stdev=2222.96 00:34:09.562 clat (usec): min=7729, max=52901, avg=10722.02, stdev=1868.04 00:34:09.562 lat (usec): min=7742, max=52929, avg=10734.66, stdev=1868.25 00:34:09.562 clat percentiles (usec): 00:34:09.562 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:34:09.562 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:34:09.562 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:34:09.562 | 99.00th=[12780], 99.50th=[12911], 99.90th=[52167], 99.95th=[52167], 00:34:09.562 | 99.99th=[52691] 00:34:09.562 bw ( KiB/s): min=34304, max=38144, per=32.90%, avg=35852.80, stdev=1103.37, samples=20 00:34:09.562 iops : min= 268, max= 298, avg=280.10, stdev= 8.62, samples=20 00:34:09.562 lat (msec) : 10=22.33%, 20=77.49%, 50=0.07%, 100=0.11% 00:34:09.562 cpu : usr=95.73%, sys=3.95%, ctx=16, majf=0, minf=10 00:34:09.562 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.562 issued rwts: total=2803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.562 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:09.562 filename0: (groupid=0, jobs=1): err= 0: pid=1725812: Sun Nov 17 14:43:56 2024 00:34:09.562 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(337MiB/10047msec) 00:34:09.562 slat (nsec): min=6503, max=53886, avg=12866.78, stdev=4418.11 00:34:09.562 clat (usec): min=6180, max=50666, avg=11144.61, stdev=1374.66 00:34:09.562 lat (usec): min=6189, max=50681, avg=11157.47, stdev=1374.89 00:34:09.562 clat percentiles (usec): 00:34:09.562 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10028], 
20.00th=[10421], 00:34:09.562 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:34:09.562 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:34:09.562 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13960], 99.95th=[46924], 00:34:09.562 | 99.99th=[50594] 00:34:09.562 bw ( KiB/s): min=32512, max=37120, per=31.66%, avg=34496.00, stdev=1183.02, samples=20 00:34:09.562 iops : min= 254, max= 290, avg=269.50, stdev= 9.24, samples=20 00:34:09.562 lat (msec) : 10=9.34%, 20=90.58%, 50=0.04%, 100=0.04% 00:34:09.562 cpu : usr=95.88%, sys=3.79%, ctx=18, majf=0, minf=13 00:34:09.562 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.562 issued rwts: total=2697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.562 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:09.562 00:34:09.562 Run status group 0 (all jobs): 00:34:09.562 READ: bw=106MiB/s (112MB/s), 33.6MiB/s-38.0MiB/s (35.2MB/s-39.8MB/s), io=1069MiB (1121MB), run=10046-10047msec 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.562 00:34:09.562 real 0m11.129s 00:34:09.562 user 0m35.477s 00:34:09.562 sys 0m1.495s 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:09.562 14:43:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:09.562 ************************************ 00:34:09.562 END TEST fio_dif_digest 00:34:09.562 ************************************ 00:34:09.562 14:43:57 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:09.562 14:43:57 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.562 rmmod nvme_tcp 00:34:09.562 rmmod nvme_fabrics 00:34:09.562 rmmod nvme_keyring 00:34:09.562 
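Teardown then reverses the setup: the initiator side unloads the kernel NVMe/TCP stack, and the nvmf_tgt app started for this suite (pid 1717432 here) is killed and reaped. A minimal sketch of the same steps ($nvmfpid is the pid the harness records at startup):

    # Unload initiator modules; modprobe -r also drops the now-unused
    # dependencies (nvme_fabrics, nvme_keyring in the rmmod output above)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the target app and wait for it to exit
    kill "$nvmfpid" && wait "$nvmfpid"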
14:43:57 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1717432 ']' 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1717432 00:34:09.562 14:43:57 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1717432 ']' 00:34:09.562 14:43:57 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1717432 00:34:09.562 14:43:57 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:09.562 14:43:57 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:09.562 14:43:57 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1717432 00:34:09.562 14:43:57 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:09.562 14:43:57 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:09.562 14:43:57 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1717432' 00:34:09.562 killing process with pid 1717432 00:34:09.562 14:43:57 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1717432 00:34:09.562 14:43:57 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1717432 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:09.562 14:43:57 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:10.940 Waiting for block devices as requested 00:34:10.940 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:11.199 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:11.199 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:11.199 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:11.458 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:11.458 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:11.458 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:11.717 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:11.717 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:11.717 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:11.976 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:11.976 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:11.976 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:11.976 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:12.235 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:12.235 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:12.235 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:12.494 14:44:01 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:12.494 14:44:01 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:12.494 14:44:01 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:12.494 14:44:01 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:12.494 14:44:01 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:12.494 14:44:01 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:12.494 14:44:01 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:12.494 14:44:01 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:12.494 14:44:01 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.494 14:44:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:12.494 14:44:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.401 14:44:03 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:34:14.401 00:34:14.401 real 1m14.067s 00:34:14.401 user 7m10.239s 00:34:14.401 sys 0m20.231s 00:34:14.401 14:44:03 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.401 14:44:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:14.401 ************************************ 00:34:14.401 END TEST nvmf_dif 00:34:14.401 ************************************ 00:34:14.401 14:44:03 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:14.401 14:44:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:14.401 14:44:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:14.401 14:44:03 -- common/autotest_common.sh@10 -- # set +x 00:34:14.661 ************************************ 00:34:14.661 START TEST nvmf_abort_qd_sizes 00:34:14.661 ************************************ 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:14.661 * Looking for test storage... 00:34:14.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:14.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.661 --rc genhtml_branch_coverage=1 00:34:14.661 --rc genhtml_function_coverage=1 00:34:14.661 --rc genhtml_legend=1 00:34:14.661 --rc geninfo_all_blocks=1 00:34:14.661 --rc geninfo_unexecuted_blocks=1 00:34:14.661 00:34:14.661 ' 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:14.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.661 --rc genhtml_branch_coverage=1 00:34:14.661 --rc genhtml_function_coverage=1 00:34:14.661 --rc genhtml_legend=1 00:34:14.661 --rc geninfo_all_blocks=1 00:34:14.661 --rc geninfo_unexecuted_blocks=1 00:34:14.661 00:34:14.661 ' 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:14.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.661 --rc genhtml_branch_coverage=1 00:34:14.661 --rc genhtml_function_coverage=1 00:34:14.661 --rc genhtml_legend=1 00:34:14.661 --rc geninfo_all_blocks=1 00:34:14.661 --rc geninfo_unexecuted_blocks=1 00:34:14.661 00:34:14.661 ' 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:14.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.661 --rc genhtml_branch_coverage=1 00:34:14.661 --rc genhtml_function_coverage=1 00:34:14.661 --rc genhtml_legend=1 00:34:14.661 --rc geninfo_all_blocks=1 00:34:14.661 --rc geninfo_unexecuted_blocks=1 00:34:14.661 00:34:14.661 ' 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.661 14:44:03 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:14.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:14.662 14:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:21.236 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:21.237 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:21.237 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:21.237 Found net devices under 0000:86:00.0: cvl_0_0 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:21.237 Found net devices under 0000:86:00.1: cvl_0_1 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:21.237 14:44:09 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:21.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:21.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:34:21.237 00:34:21.237 --- 10.0.0.2 ping statistics --- 00:34:21.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.237 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:21.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:21.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:34:21.237 00:34:21.237 --- 10.0.0.1 ping statistics --- 00:34:21.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.237 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:21.237 14:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:23.775 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:23.775 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:24.345 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:24.345 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:24.345 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:24.345 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:24.345 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:24.345 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:24.345 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1733740 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1733740 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1733740 ']' 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
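The nvmf_tcp_init sequence traced above reduces to a small, self-contained recipe: the target-side E810 port is moved into a private network namespace so initiator and target can exchange NVMe/TCP traffic over real NIC queues on a single host. A condensed sketch of those steps, assuming the two ports enumerated as cvl_0_0 (target, 10.0.0.2) and cvl_0_1 (initiator, 10.0.0.1) as they did on this runner:

    # isolate the target port in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address both ends of the link
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because NVMF_TARGET_NS_CMD is prepended to NVMF_APP, every nvmf_tgt launched from this point runs inside cvl_0_0_ns_spdk, which is why the app start below is wrapped in "ip netns exec cvl_0_0_ns_spdk".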
00:34:24.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:24.604 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:24.604 [2024-11-17 14:44:13.656003] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:34:24.604 [2024-11-17 14:44:13.656046] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:24.604 [2024-11-17 14:44:13.736288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:24.604 [2024-11-17 14:44:13.780160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:24.604 [2024-11-17 14:44:13.780199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:24.604 [2024-11-17 14:44:13.780207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:24.604 [2024-11-17 14:44:13.780213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:24.604 [2024-11-17 14:44:13.780219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:24.604 [2024-11-17 14:44:13.781743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:24.604 [2024-11-17 14:44:13.781851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:24.604 [2024-11-17 14:44:13.781958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.604 [2024-11-17 14:44:13.781958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:24.864 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:24.864 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:24.864 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:24.864 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:24.864 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:24.864 14:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:24.864 14:44:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:24.864 14:44:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:24.865 
14:44:13 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:24.865 14:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:24.865 ************************************ 00:34:24.865 START TEST spdk_target_abort 00:34:24.865 ************************************ 00:34:24.865 14:44:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:24.865 14:44:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:24.865 14:44:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:24.865 14:44:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.865 14:44:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:28.159 spdk_targetn1 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:28.159 [2024-11-17 14:44:16.794365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:28.159 [2024-11-17 14:44:16.838578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:28.159 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:28.160 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:28.160 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:28.160 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:28.160 14:44:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:31.452 Initializing NVMe Controllers 00:34:31.452 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:31.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:31.452 Initialization complete. Launching workers. 00:34:31.452 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17881, failed: 0 00:34:31.452 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1323, failed to submit 16558 00:34:31.452 success 764, unsuccessful 559, failed 0 00:34:31.452 14:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:31.452 14:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:34.742 Initializing NVMe Controllers 00:34:34.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:34.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:34.742 Initialization complete. Launching workers. 00:34:34.742 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8546, failed: 0 00:34:34.742 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 7301 00:34:34.742 success 302, unsuccessful 943, failed 0 00:34:34.742 14:44:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:34.742 14:44:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:38.033 Initializing NVMe Controllers 00:34:38.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:38.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:38.033 Initialization complete. Launching workers. 
00:34:38.033 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37422, failed: 0 00:34:38.033 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2761, failed to submit 34661 00:34:38.034 success 601, unsuccessful 2160, failed 0 00:34:38.034 14:44:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:38.034 14:44:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.034 14:44:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.034 14:44:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.034 14:44:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:38.034 14:44:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.034 14:44:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1733740 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1733740 ']' 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1733740 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1733740 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1733740' 00:34:38.972 killing process with pid 1733740 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1733740 00:34:38.972 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1733740 00:34:39.232 00:34:39.232 real 0m14.321s 00:34:39.232 user 0m54.461s 00:34:39.232 sys 0m2.719s 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:39.232 ************************************ 00:34:39.232 END TEST spdk_target_abort 00:34:39.232 ************************************ 00:34:39.232 14:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:39.232 14:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:39.232 14:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:39.232 14:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:39.232 ************************************ 00:34:39.232 START TEST kernel_target_abort 00:34:39.232 
************************************ 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:39.232 14:44:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:42.524 Waiting for block devices as requested 00:34:42.524 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:42.524 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:42.524 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:42.524 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:42.524 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:42.524 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:42.524 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:42.524 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:42.784 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:42.784 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:42.784 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:43.043 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:43.043 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:43.043 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:43.043 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:43.303 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:43.303 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:43.303 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:43.303 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:43.303 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:43.303 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:43.303 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:43.303 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:43.303 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:43.303 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:43.303 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:43.563 No valid GPT data, bailing 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:43.563 14:44:32 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:43.563 00:34:43.563 Discovery Log Number of Records 2, Generation counter 2 00:34:43.563 =====Discovery Log Entry 0====== 00:34:43.563 trtype: tcp 00:34:43.563 adrfam: ipv4 00:34:43.563 subtype: current discovery subsystem 00:34:43.563 treq: not specified, sq flow control disable supported 00:34:43.563 portid: 1 00:34:43.563 trsvcid: 4420 00:34:43.563 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:43.563 traddr: 10.0.0.1 00:34:43.563 eflags: none 00:34:43.563 sectype: none 00:34:43.563 =====Discovery Log Entry 1====== 00:34:43.563 trtype: tcp 00:34:43.563 adrfam: ipv4 00:34:43.563 subtype: nvme subsystem 00:34:43.563 treq: not specified, sq flow control disable supported 00:34:43.563 portid: 1 00:34:43.563 trsvcid: 4420 00:34:43.563 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:43.563 traddr: 10.0.0.1 00:34:43.563 eflags: none 00:34:43.563 sectype: none 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:43.563 14:44:32 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:43.563 14:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:46.853 Initializing NVMe Controllers 00:34:46.853 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:46.853 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:46.853 Initialization complete. Launching workers. 00:34:46.853 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93946, failed: 0 00:34:46.853 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93946, failed to submit 0 00:34:46.853 success 0, unsuccessful 93946, failed 0 00:34:46.853 14:44:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:46.853 14:44:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:50.141 Initializing NVMe Controllers 00:34:50.141 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:50.141 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:50.141 Initialization complete. Launching workers. 
00:34:50.141 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 143954, failed: 0 00:34:50.141 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36086, failed to submit 107868 00:34:50.141 success 0, unsuccessful 36086, failed 0 00:34:50.141 14:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:50.141 14:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:53.430 Initializing NVMe Controllers 00:34:53.430 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:53.430 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:53.430 Initialization complete. Launching workers. 00:34:53.430 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 136508, failed: 0 00:34:53.430 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34194, failed to submit 102314 00:34:53.430 success 0, unsuccessful 34194, failed 0 00:34:53.430 14:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:53.430 14:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:53.430 14:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:53.430 14:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:53.430 14:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:53.430 14:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:53.430 14:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:53.430 14:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:53.430 14:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:53.430 14:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:55.965 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:55.965 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:34:55.965 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:56.986 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:56.986 00:34:56.986 real 0m17.584s 00:34:56.986 user 0m9.162s 00:34:56.986 sys 0m5.075s 00:34:56.986 14:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:56.986 14:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:56.986 ************************************ 00:34:56.986 END TEST kernel_target_abort 00:34:56.986 ************************************ 00:34:56.986 14:44:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:56.986 14:44:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:56.986 14:44:45 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:56.986 14:44:45 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:56.986 14:44:45 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:56.986 14:44:45 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:56.986 14:44:45 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:56.986 14:44:45 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:56.986 rmmod nvme_tcp 00:34:56.986 rmmod nvme_fabrics 00:34:56.986 rmmod nvme_keyring 00:34:56.986 14:44:46 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:56.986 14:44:46 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:56.986 14:44:46 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:56.986 14:44:46 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1733740 ']' 00:34:56.986 14:44:46 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1733740 00:34:56.986 14:44:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1733740 ']' 00:34:56.986 14:44:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1733740 00:34:56.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1733740) - No such process 00:34:56.986 14:44:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1733740 is not found' 00:34:56.986 Process with pid 1733740 is not found 00:34:56.986 14:44:46 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:56.986 14:44:46 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:59.598 Waiting for block devices as requested 00:34:59.598 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:59.857 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:59.857 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:59.857 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:00.118 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:00.118 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:00.118 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:00.379 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:00.379 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:00.379 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:00.379 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:00.637 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:00.637 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:00.637 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:00.896 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:00.896 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:00.896 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:01.156 14:44:50 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:01.156 14:44:50 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:01.156 14:44:50 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:01.156 14:44:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:01.156 14:44:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:01.156 14:44:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:01.156 14:44:50 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:01.156 14:44:50 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:01.156 14:44:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.156 14:44:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:01.156 14:44:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.065 14:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:03.065 00:35:03.065 real 0m48.560s 00:35:03.065 user 1m7.984s 00:35:03.065 sys 0m16.511s 00:35:03.065 14:44:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.065 14:44:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.065 ************************************ 00:35:03.065 END TEST nvmf_abort_qd_sizes 00:35:03.065 ************************************ 00:35:03.065 14:44:52 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:03.065 14:44:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:03.065 14:44:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.065 14:44:52 -- common/autotest_common.sh@10 -- # set +x 00:35:03.065 ************************************ 00:35:03.065 START TEST keyring_file 00:35:03.065 ************************************ 00:35:03.065 14:44:52 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:03.325 * Looking for test storage... 
00:35:03.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:03.325 14:44:52 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:03.325 14:44:52 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:03.325 14:44:52 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:03.325 14:44:52 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:03.325 14:44:52 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:03.325 14:44:52 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:03.325 14:44:52 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:03.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.325 --rc genhtml_branch_coverage=1 00:35:03.325 --rc genhtml_function_coverage=1 00:35:03.325 --rc genhtml_legend=1 00:35:03.325 --rc geninfo_all_blocks=1 00:35:03.325 --rc geninfo_unexecuted_blocks=1 00:35:03.325 00:35:03.325 ' 00:35:03.325 14:44:52 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:03.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.325 --rc genhtml_branch_coverage=1 00:35:03.325 --rc genhtml_function_coverage=1 00:35:03.325 --rc genhtml_legend=1 00:35:03.325 --rc geninfo_all_blocks=1 
00:35:03.325 --rc geninfo_unexecuted_blocks=1 00:35:03.325 00:35:03.325 ' 00:35:03.325 14:44:52 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:03.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.325 --rc genhtml_branch_coverage=1 00:35:03.325 --rc genhtml_function_coverage=1 00:35:03.325 --rc genhtml_legend=1 00:35:03.325 --rc geninfo_all_blocks=1 00:35:03.325 --rc geninfo_unexecuted_blocks=1 00:35:03.325 00:35:03.325 ' 00:35:03.325 14:44:52 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:03.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.325 --rc genhtml_branch_coverage=1 00:35:03.325 --rc genhtml_function_coverage=1 00:35:03.325 --rc genhtml_legend=1 00:35:03.325 --rc geninfo_all_blocks=1 00:35:03.325 --rc geninfo_unexecuted_blocks=1 00:35:03.325 00:35:03.325 ' 00:35:03.326 14:44:52 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.326 14:44:52 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:03.326 14:44:52 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.326 14:44:52 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.326 14:44:52 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.326 14:44:52 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.326 14:44:52 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.326 14:44:52 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.326 14:44:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:03.326 14:44:52 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:03.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:03.326 14:44:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:03.326 14:44:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:03.326 14:44:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:03.326 14:44:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:03.326 14:44:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:03.326 14:44:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
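The prep_key helper entered here (its body is traced through the following records) turns a raw hex key into an NVMe TLS PSK file that the keyring test can load by path. A minimal sketch of the equivalent steps; the exact base64 payload is computed by the inline "python -" helper, and my reading of format_interchange_psk is that it encodes the key bytes plus a CRC32 in the PSK interchange format:

    key_path=$(mktemp)                            # the trace shows /tmp/tmp.9ZOShdboGW
    # interchange format: NVMeTLSkey-1:<digest>:<base64 of key bytes + CRC32>:
    # (illustrative placeholder below, not the helper's real output)
    echo "NVMeTLSkey-1:00:<base64-payload>:" > "$key_path"
    chmod 0600 "$key_path"                        # owner-only, matching the trace

key0 and key1 differ only in the hex string fed in; both resulting files are later handed to bdevperf by path.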
00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9ZOShdboGW 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9ZOShdboGW 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9ZOShdboGW 00:35:03.326 14:44:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.9ZOShdboGW 00:35:03.326 14:44:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FiSa289Ovy 00:35:03.326 14:44:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:03.326 14:44:52 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:03.586 14:44:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FiSa289Ovy 00:35:03.586 14:44:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FiSa289Ovy 00:35:03.586 14:44:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.FiSa289Ovy 00:35:03.586 14:44:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=1742603 00:35:03.586 14:44:52 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:03.586 14:44:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1742603 00:35:03.586 14:44:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1742603 ']' 00:35:03.586 14:44:52 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.586 14:44:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.586 14:44:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.586 14:44:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.586 14:44:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:03.586 [2024-11-17 14:44:52.637179] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:35:03.586 [2024-11-17 14:44:52.637227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742603 ] 00:35:03.586 [2024-11-17 14:44:52.709710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.586 [2024-11-17 14:44:52.749868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.845 14:44:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.845 14:44:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:03.845 14:44:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:03.845 14:44:52 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.845 14:44:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:03.845 [2024-11-17 14:44:52.974268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:03.845 null0 00:35:03.845 [2024-11-17 14:44:53.006320] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:03.845 [2024-11-17 14:44:53.006686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:03.845 14:44:53 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.845 14:44:53 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:03.845 14:44:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:03.845 14:44:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:03.845 14:44:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:03.845 14:44:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.845 14:44:53 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:03.845 14:44:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.845 14:44:53 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:03.845 14:44:53 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.845 14:44:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:03.845 [2024-11-17 14:44:53.034388] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:03.845 request: 00:35:03.845 { 00:35:03.845 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.845 "secure_channel": false, 00:35:03.845 "listen_address": { 00:35:03.845 "trtype": "tcp", 00:35:03.845 "traddr": "127.0.0.1", 00:35:03.845 "trsvcid": "4420" 00:35:03.845 }, 00:35:03.845 "method": "nvmf_subsystem_add_listener", 00:35:03.845 "req_id": 1 00:35:03.845 } 00:35:03.845 Got JSON-RPC error response 00:35:03.845 response: 00:35:03.845 { 00:35:03.845 
"code": -32602, 00:35:03.845 "message": "Invalid parameters" 00:35:03.846 } 00:35:03.846 14:44:53 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:03.846 14:44:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:03.846 14:44:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:03.846 14:44:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:03.846 14:44:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:03.846 14:44:53 keyring_file -- keyring/file.sh@47 -- # bperfpid=1742607 00:35:03.846 14:44:53 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:03.846 14:44:53 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1742607 /var/tmp/bperf.sock 00:35:03.846 14:44:53 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1742607 ']' 00:35:03.846 14:44:53 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:03.846 14:44:53 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.846 14:44:53 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:03.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:03.846 14:44:53 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.846 14:44:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:04.105 [2024-11-17 14:44:53.088089] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:35:04.105 [2024-11-17 14:44:53.088130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742607 ] 00:35:04.105 [2024-11-17 14:44:53.161621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.105 [2024-11-17 14:44:53.202157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.105 14:44:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.105 14:44:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:04.105 14:44:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZOShdboGW 00:35:04.105 14:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9ZOShdboGW 00:35:04.364 14:44:53 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FiSa289Ovy 00:35:04.364 14:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FiSa289Ovy 00:35:04.624 14:44:53 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:04.624 14:44:53 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:04.624 14:44:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.624 14:44:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:04.624 14:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:04.883 14:44:53 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.9ZOShdboGW == \/\t\m\p\/\t\m\p\.\9\Z\O\S\h\d\b\o\G\W ]] 00:35:04.883 14:44:53 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:04.883 14:44:53 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:04.883 14:44:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.883 14:44:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:04.883 14:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.883 14:44:54 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.FiSa289Ovy == \/\t\m\p\/\t\m\p\.\F\i\S\a\2\8\9\O\v\y ]] 00:35:04.883 14:44:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:04.883 14:44:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:04.883 14:44:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:04.883 14:44:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.883 14:44:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:04.883 14:44:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:05.142 14:44:54 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:05.142 14:44:54 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:05.142 14:44:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:05.142 14:44:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:05.142 14:44:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.142 14:44:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:05.142 14:44:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:05.402 14:44:54 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:05.402 14:44:54 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:05.402 14:44:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:05.660 [2024-11-17 14:44:54.648912] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:05.660 nvme0n1 00:35:05.660 14:44:54 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:05.660 14:44:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:05.660 14:44:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:05.660 14:44:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:05.660 14:44:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.660 14:44:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:05.919 14:44:54 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:05.919 14:44:54 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:05.919 14:44:54 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:35:05.919 14:44:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:05.919 14:44:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.919 14:44:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:05.919 14:44:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.178 14:44:55 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:06.179 14:44:55 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:06.179 Running I/O for 1 seconds... 00:35:07.118 18714.00 IOPS, 73.10 MiB/s 00:35:07.119 Latency(us) 00:35:07.119 [2024-11-17T13:44:56.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.119 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:07.119 nvme0n1 : 1.00 18762.08 73.29 0.00 0.00 6809.61 2792.40 18692.01 00:35:07.119 [2024-11-17T13:44:56.344Z] =================================================================================================================== 00:35:07.119 [2024-11-17T13:44:56.344Z] Total : 18762.08 73.29 0.00 0.00 6809.61 2792.40 18692.01 00:35:07.119 { 00:35:07.119 "results": [ 00:35:07.119 { 00:35:07.119 "job": "nvme0n1", 00:35:07.119 "core_mask": "0x2", 00:35:07.119 "workload": "randrw", 00:35:07.119 "percentage": 50, 00:35:07.119 "status": "finished", 00:35:07.119 "queue_depth": 128, 00:35:07.119 "io_size": 4096, 00:35:07.119 "runtime": 1.004313, 00:35:07.119 "iops": 18762.079152614773, 00:35:07.119 "mibps": 73.28937168990146, 00:35:07.119 "io_failed": 0, 00:35:07.119 "io_timeout": 0, 00:35:07.119 "avg_latency_us": 6809.606308604971, 00:35:07.119 "min_latency_us": 2792.4034782608696, 00:35:07.119 "max_latency_us": 18692.006956521738 00:35:07.119 } 00:35:07.119 ], 00:35:07.119 "core_count": 1 00:35:07.119 } 00:35:07.119 14:44:56 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:07.119 14:44:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:07.377 14:44:56 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:07.377 14:44:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:07.377 14:44:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.377 14:44:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.377 14:44:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.377 14:44:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.636 14:44:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:07.637 14:44:56 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:07.637 14:44:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:07.637 14:44:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.637 14:44:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.637 14:44:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:07.637 14:44:56 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.896 14:44:56 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:07.896 14:44:56 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:07.896 14:44:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:07.896 14:44:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:07.896 14:44:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:07.896 14:44:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:07.896 14:44:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:07.896 14:44:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:07.896 14:44:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:07.896 14:44:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:07.896 [2024-11-17 14:44:57.059506] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:07.896 [2024-11-17 14:44:57.060201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1348d00 (107): Transport endpoint is not connected 00:35:07.896 [2024-11-17 14:44:57.061194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1348d00 (9): Bad file descriptor 00:35:07.896 [2024-11-17 14:44:57.062196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:07.896 [2024-11-17 14:44:57.062205] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:07.896 [2024-11-17 14:44:57.062212] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:07.896 [2024-11-17 14:44:57.062220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
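[Editor's note] This attach is expected to fail: the host was set up with key0's PSK, so the handshake offering key1 does not complete (spdk_sock_recv errno 107 above, then a bad file descriptor on the flush), and bdev_nvme_attach_controller surfaces -5 in the request/response dump just below. The NOT wrapper inverts the exit status so the test passes only when the RPC fails. A simplified sketch (the real helper, visible in the trace, also screens signal deaths via es > 128 before inverting):

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # success here means the wrapped command failed
}
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

After the dump, both refcnts are re-checked at 1, i.e. the failed attach took no lasting reference on either key.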
00:35:07.896 request: 00:35:07.896 { 00:35:07.896 "name": "nvme0", 00:35:07.896 "trtype": "tcp", 00:35:07.896 "traddr": "127.0.0.1", 00:35:07.896 "adrfam": "ipv4", 00:35:07.896 "trsvcid": "4420", 00:35:07.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:07.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:07.896 "prchk_reftag": false, 00:35:07.896 "prchk_guard": false, 00:35:07.896 "hdgst": false, 00:35:07.896 "ddgst": false, 00:35:07.896 "psk": "key1", 00:35:07.896 "allow_unrecognized_csi": false, 00:35:07.896 "method": "bdev_nvme_attach_controller", 00:35:07.896 "req_id": 1 00:35:07.896 } 00:35:07.896 Got JSON-RPC error response 00:35:07.896 response: 00:35:07.896 { 00:35:07.896 "code": -5, 00:35:07.896 "message": "Input/output error" 00:35:07.896 } 00:35:07.896 14:44:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:07.896 14:44:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:07.896 14:44:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:07.896 14:44:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:07.896 14:44:57 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:07.896 14:44:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:07.896 14:44:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.896 14:44:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.896 14:44:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.897 14:44:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.156 14:44:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:08.156 14:44:57 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:08.156 14:44:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:08.156 14:44:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.156 14:44:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.156 14:44:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:08.156 14:44:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.415 14:44:57 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:08.415 14:44:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:08.415 14:44:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:08.675 14:44:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:08.675 14:44:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:08.675 14:44:57 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:08.675 14:44:57 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:08.675 14:44:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.934 14:44:58 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:35:08.934 14:44:58 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.9ZOShdboGW 00:35:08.934 14:44:58 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZOShdboGW 00:35:08.934 14:44:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:08.934 14:44:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZOShdboGW 00:35:08.934 14:44:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:08.934 14:44:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:08.934 14:44:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:08.934 14:44:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:08.934 14:44:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZOShdboGW 00:35:08.935 14:44:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9ZOShdboGW 00:35:09.195 [2024-11-17 14:44:58.271413] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9ZOShdboGW': 0100660 00:35:09.195 [2024-11-17 14:44:58.271439] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:09.195 request: 00:35:09.195 { 00:35:09.195 "name": "key0", 00:35:09.195 "path": "/tmp/tmp.9ZOShdboGW", 00:35:09.195 "method": "keyring_file_add_key", 00:35:09.195 "req_id": 1 00:35:09.195 } 00:35:09.195 Got JSON-RPC error response 00:35:09.195 response: 00:35:09.195 { 00:35:09.195 "code": -1, 00:35:09.195 "message": "Operation not permitted" 00:35:09.195 } 00:35:09.195 14:44:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:09.195 14:44:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:09.195 14:44:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:09.195 14:44:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:09.195 14:44:58 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.9ZOShdboGW 00:35:09.195 14:44:58 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZOShdboGW 00:35:09.195 14:44:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9ZOShdboGW 00:35:09.455 14:44:58 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.9ZOShdboGW 00:35:09.455 14:44:58 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:09.455 14:44:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:09.455 14:44:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.455 14:44:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.455 14:44:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:09.455 14:44:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.715 14:44:58 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:09.715 14:44:58 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:09.715 14:44:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:09.715 14:44:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:09.715 14:44:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:09.715 14:44:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.715 14:44:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:09.715 14:44:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.715 14:44:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:09.715 14:44:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:09.715 [2024-11-17 14:44:58.856957] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.9ZOShdboGW': No such file or directory 00:35:09.715 [2024-11-17 14:44:58.856975] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:09.715 [2024-11-17 14:44:58.856991] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:09.715 [2024-11-17 14:44:58.856998] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:09.715 [2024-11-17 14:44:58.857005] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:09.715 [2024-11-17 14:44:58.857011] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:09.715 request: 00:35:09.715 { 00:35:09.715 "name": "nvme0", 00:35:09.715 "trtype": "tcp", 00:35:09.715 "traddr": "127.0.0.1", 00:35:09.715 "adrfam": "ipv4", 00:35:09.715 "trsvcid": "4420", 00:35:09.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.715 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:09.715 "prchk_reftag": false, 00:35:09.715 "prchk_guard": false, 00:35:09.715 "hdgst": false, 00:35:09.715 "ddgst": false, 00:35:09.715 "psk": "key0", 00:35:09.715 "allow_unrecognized_csi": false, 00:35:09.715 "method": "bdev_nvme_attach_controller", 00:35:09.715 "req_id": 1 00:35:09.715 } 00:35:09.715 Got JSON-RPC error response 00:35:09.715 response: 00:35:09.715 { 00:35:09.715 "code": -19, 00:35:09.715 "message": "No such device" 00:35:09.715 } 00:35:09.715 14:44:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:09.715 14:44:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:09.715 14:44:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:09.715 14:44:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:09.715 14:44:58 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:09.715 14:44:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:09.975 14:44:59 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:09.975 14:44:59 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:35:09.975 14:44:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:09.975 14:44:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:09.975 14:44:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:09.975 14:44:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:09.975 14:44:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.In5LjzTq8y 00:35:09.975 14:44:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:09.975 14:44:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:09.975 14:44:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:09.975 14:44:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:09.975 14:44:59 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:09.975 14:44:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:09.975 14:44:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:09.975 14:44:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.In5LjzTq8y 00:35:09.976 14:44:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.In5LjzTq8y 00:35:09.976 14:44:59 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.In5LjzTq8y 00:35:09.976 14:44:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.In5LjzTq8y 00:35:09.976 14:44:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.In5LjzTq8y 00:35:10.234 14:44:59 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.234 14:44:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.493 nvme0n1 00:35:10.493 14:44:59 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:10.493 14:44:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:10.493 14:44:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.493 14:44:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.493 14:44:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.493 14:44:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.752 14:44:59 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:10.752 14:44:59 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:10.752 14:44:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:10.752 14:44:59 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:10.752 14:44:59 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:10.752 14:44:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.752 14:44:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.752 14:44:59 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.010 14:45:00 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:11.010 14:45:00 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:11.010 14:45:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:11.010 14:45:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.010 14:45:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.010 14:45:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.010 14:45:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:11.269 14:45:00 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:11.269 14:45:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:11.269 14:45:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:11.527 14:45:00 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:11.527 14:45:00 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:11.527 14:45:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.786 14:45:00 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:11.786 14:45:00 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.In5LjzTq8y 00:35:11.786 14:45:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.In5LjzTq8y 00:35:11.786 14:45:00 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FiSa289Ovy 00:35:11.786 14:45:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FiSa289Ovy 00:35:12.047 14:45:01 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.047 14:45:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.305 nvme0n1 00:35:12.305 14:45:01 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:12.305 14:45:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:12.565 14:45:01 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:12.565 "subsystems": [ 00:35:12.565 { 00:35:12.565 "subsystem": "keyring", 00:35:12.565 "config": [ 00:35:12.565 { 00:35:12.565 "method": "keyring_file_add_key", 00:35:12.565 "params": { 00:35:12.565 "name": "key0", 00:35:12.565 "path": "/tmp/tmp.In5LjzTq8y" 00:35:12.565 } 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "method": "keyring_file_add_key", 00:35:12.565 "params": { 00:35:12.565 "name": "key1", 00:35:12.565 "path": "/tmp/tmp.FiSa289Ovy" 00:35:12.565 } 00:35:12.565 } 00:35:12.565 ] 00:35:12.565 
}, 00:35:12.565 { 00:35:12.565 "subsystem": "iobuf", 00:35:12.565 "config": [ 00:35:12.565 { 00:35:12.565 "method": "iobuf_set_options", 00:35:12.565 "params": { 00:35:12.565 "small_pool_count": 8192, 00:35:12.565 "large_pool_count": 1024, 00:35:12.565 "small_bufsize": 8192, 00:35:12.565 "large_bufsize": 135168, 00:35:12.565 "enable_numa": false 00:35:12.565 } 00:35:12.565 } 00:35:12.565 ] 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "subsystem": "sock", 00:35:12.565 "config": [ 00:35:12.565 { 00:35:12.565 "method": "sock_set_default_impl", 00:35:12.565 "params": { 00:35:12.565 "impl_name": "posix" 00:35:12.565 } 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "method": "sock_impl_set_options", 00:35:12.565 "params": { 00:35:12.565 "impl_name": "ssl", 00:35:12.565 "recv_buf_size": 4096, 00:35:12.565 "send_buf_size": 4096, 00:35:12.565 "enable_recv_pipe": true, 00:35:12.565 "enable_quickack": false, 00:35:12.565 "enable_placement_id": 0, 00:35:12.565 "enable_zerocopy_send_server": true, 00:35:12.565 "enable_zerocopy_send_client": false, 00:35:12.565 "zerocopy_threshold": 0, 00:35:12.565 "tls_version": 0, 00:35:12.565 "enable_ktls": false 00:35:12.565 } 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "method": "sock_impl_set_options", 00:35:12.565 "params": { 00:35:12.565 "impl_name": "posix", 00:35:12.565 "recv_buf_size": 2097152, 00:35:12.565 "send_buf_size": 2097152, 00:35:12.565 "enable_recv_pipe": true, 00:35:12.565 "enable_quickack": false, 00:35:12.565 "enable_placement_id": 0, 00:35:12.565 "enable_zerocopy_send_server": true, 00:35:12.565 "enable_zerocopy_send_client": false, 00:35:12.565 "zerocopy_threshold": 0, 00:35:12.565 "tls_version": 0, 00:35:12.565 "enable_ktls": false 00:35:12.565 } 00:35:12.565 } 00:35:12.565 ] 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "subsystem": "vmd", 00:35:12.565 "config": [] 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "subsystem": "accel", 00:35:12.565 "config": [ 00:35:12.565 { 00:35:12.565 "method": "accel_set_options", 00:35:12.565 "params": { 00:35:12.565 "small_cache_size": 128, 00:35:12.565 "large_cache_size": 16, 00:35:12.565 "task_count": 2048, 00:35:12.565 "sequence_count": 2048, 00:35:12.565 "buf_count": 2048 00:35:12.565 } 00:35:12.565 } 00:35:12.565 ] 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "subsystem": "bdev", 00:35:12.565 "config": [ 00:35:12.565 { 00:35:12.565 "method": "bdev_set_options", 00:35:12.565 "params": { 00:35:12.565 "bdev_io_pool_size": 65535, 00:35:12.565 "bdev_io_cache_size": 256, 00:35:12.565 "bdev_auto_examine": true, 00:35:12.565 "iobuf_small_cache_size": 128, 00:35:12.565 "iobuf_large_cache_size": 16 00:35:12.565 } 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "method": "bdev_raid_set_options", 00:35:12.565 "params": { 00:35:12.565 "process_window_size_kb": 1024, 00:35:12.565 "process_max_bandwidth_mb_sec": 0 00:35:12.565 } 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "method": "bdev_iscsi_set_options", 00:35:12.565 "params": { 00:35:12.565 "timeout_sec": 30 00:35:12.565 } 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "method": "bdev_nvme_set_options", 00:35:12.565 "params": { 00:35:12.565 "action_on_timeout": "none", 00:35:12.565 "timeout_us": 0, 00:35:12.565 "timeout_admin_us": 0, 00:35:12.565 "keep_alive_timeout_ms": 10000, 00:35:12.565 "arbitration_burst": 0, 00:35:12.565 "low_priority_weight": 0, 00:35:12.565 "medium_priority_weight": 0, 00:35:12.565 "high_priority_weight": 0, 00:35:12.565 "nvme_adminq_poll_period_us": 10000, 00:35:12.565 "nvme_ioq_poll_period_us": 0, 00:35:12.565 "io_queue_requests": 512, 00:35:12.565 
"delay_cmd_submit": true, 00:35:12.565 "transport_retry_count": 4, 00:35:12.565 "bdev_retry_count": 3, 00:35:12.565 "transport_ack_timeout": 0, 00:35:12.565 "ctrlr_loss_timeout_sec": 0, 00:35:12.565 "reconnect_delay_sec": 0, 00:35:12.565 "fast_io_fail_timeout_sec": 0, 00:35:12.565 "disable_auto_failback": false, 00:35:12.565 "generate_uuids": false, 00:35:12.565 "transport_tos": 0, 00:35:12.565 "nvme_error_stat": false, 00:35:12.565 "rdma_srq_size": 0, 00:35:12.565 "io_path_stat": false, 00:35:12.565 "allow_accel_sequence": false, 00:35:12.565 "rdma_max_cq_size": 0, 00:35:12.565 "rdma_cm_event_timeout_ms": 0, 00:35:12.565 "dhchap_digests": [ 00:35:12.565 "sha256", 00:35:12.565 "sha384", 00:35:12.565 "sha512" 00:35:12.565 ], 00:35:12.565 "dhchap_dhgroups": [ 00:35:12.565 "null", 00:35:12.565 "ffdhe2048", 00:35:12.565 "ffdhe3072", 00:35:12.565 "ffdhe4096", 00:35:12.565 "ffdhe6144", 00:35:12.565 "ffdhe8192" 00:35:12.565 ] 00:35:12.565 } 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "method": "bdev_nvme_attach_controller", 00:35:12.565 "params": { 00:35:12.565 "name": "nvme0", 00:35:12.565 "trtype": "TCP", 00:35:12.565 "adrfam": "IPv4", 00:35:12.565 "traddr": "127.0.0.1", 00:35:12.565 "trsvcid": "4420", 00:35:12.565 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:12.565 "prchk_reftag": false, 00:35:12.565 "prchk_guard": false, 00:35:12.565 "ctrlr_loss_timeout_sec": 0, 00:35:12.565 "reconnect_delay_sec": 0, 00:35:12.565 "fast_io_fail_timeout_sec": 0, 00:35:12.565 "psk": "key0", 00:35:12.565 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:12.565 "hdgst": false, 00:35:12.565 "ddgst": false, 00:35:12.565 "multipath": "multipath" 00:35:12.565 } 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "method": "bdev_nvme_set_hotplug", 00:35:12.565 "params": { 00:35:12.565 "period_us": 100000, 00:35:12.565 "enable": false 00:35:12.565 } 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "method": "bdev_wait_for_examine" 00:35:12.565 } 00:35:12.565 ] 00:35:12.565 }, 00:35:12.565 { 00:35:12.565 "subsystem": "nbd", 00:35:12.565 "config": [] 00:35:12.565 } 00:35:12.565 ] 00:35:12.565 }' 00:35:12.565 14:45:01 keyring_file -- keyring/file.sh@115 -- # killprocess 1742607 00:35:12.565 14:45:01 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1742607 ']' 00:35:12.565 14:45:01 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1742607 00:35:12.565 14:45:01 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:12.565 14:45:01 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:12.565 14:45:01 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1742607 00:35:12.565 14:45:01 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:12.566 14:45:01 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:12.566 14:45:01 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1742607' 00:35:12.566 killing process with pid 1742607 00:35:12.566 14:45:01 keyring_file -- common/autotest_common.sh@973 -- # kill 1742607 00:35:12.566 Received shutdown signal, test time was about 1.000000 seconds 00:35:12.566 00:35:12.566 Latency(us) 00:35:12.566 [2024-11-17T13:45:01.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:12.566 [2024-11-17T13:45:01.791Z] =================================================================================================================== 00:35:12.566 [2024-11-17T13:45:01.791Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:12.566 14:45:01 
keyring_file -- common/autotest_common.sh@978 -- # wait 1742607 00:35:12.826 14:45:01 keyring_file -- keyring/file.sh@118 -- # bperfpid=1744253 00:35:12.826 14:45:01 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1744253 /var/tmp/bperf.sock 00:35:12.826 14:45:01 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1744253 ']' 00:35:12.826 14:45:01 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:12.826 14:45:01 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:12.826 14:45:01 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:12.826 14:45:01 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:12.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:12.826 14:45:01 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:12.826 "subsystems": [ 00:35:12.826 { 00:35:12.826 "subsystem": "keyring", 00:35:12.826 "config": [ 00:35:12.826 { 00:35:12.826 "method": "keyring_file_add_key", 00:35:12.826 "params": { 00:35:12.826 "name": "key0", 00:35:12.826 "path": "/tmp/tmp.In5LjzTq8y" 00:35:12.826 } 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "method": "keyring_file_add_key", 00:35:12.826 "params": { 00:35:12.826 "name": "key1", 00:35:12.826 "path": "/tmp/tmp.FiSa289Ovy" 00:35:12.826 } 00:35:12.826 } 00:35:12.826 ] 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "subsystem": "iobuf", 00:35:12.826 "config": [ 00:35:12.826 { 00:35:12.826 "method": "iobuf_set_options", 00:35:12.826 "params": { 00:35:12.826 "small_pool_count": 8192, 00:35:12.826 "large_pool_count": 1024, 00:35:12.826 "small_bufsize": 8192, 00:35:12.826 "large_bufsize": 135168, 00:35:12.826 "enable_numa": false 00:35:12.826 } 00:35:12.826 } 00:35:12.826 ] 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "subsystem": "sock", 00:35:12.826 "config": [ 00:35:12.826 { 00:35:12.826 "method": "sock_set_default_impl", 00:35:12.826 "params": { 00:35:12.826 "impl_name": "posix" 00:35:12.826 } 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "method": "sock_impl_set_options", 00:35:12.826 "params": { 00:35:12.826 "impl_name": "ssl", 00:35:12.826 "recv_buf_size": 4096, 00:35:12.826 "send_buf_size": 4096, 00:35:12.826 "enable_recv_pipe": true, 00:35:12.826 "enable_quickack": false, 00:35:12.826 "enable_placement_id": 0, 00:35:12.826 "enable_zerocopy_send_server": true, 00:35:12.826 "enable_zerocopy_send_client": false, 00:35:12.826 "zerocopy_threshold": 0, 00:35:12.826 "tls_version": 0, 00:35:12.826 "enable_ktls": false 00:35:12.826 } 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "method": "sock_impl_set_options", 00:35:12.826 "params": { 00:35:12.826 "impl_name": "posix", 00:35:12.826 "recv_buf_size": 2097152, 00:35:12.826 "send_buf_size": 2097152, 00:35:12.826 "enable_recv_pipe": true, 00:35:12.826 "enable_quickack": false, 00:35:12.826 "enable_placement_id": 0, 00:35:12.826 "enable_zerocopy_send_server": true, 00:35:12.826 "enable_zerocopy_send_client": false, 00:35:12.826 "zerocopy_threshold": 0, 00:35:12.826 "tls_version": 0, 00:35:12.826 "enable_ktls": false 00:35:12.826 } 00:35:12.826 } 00:35:12.826 ] 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "subsystem": "vmd", 00:35:12.826 "config": [] 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "subsystem": "accel", 00:35:12.826 "config": [ 00:35:12.826 
{ 00:35:12.826 "method": "accel_set_options", 00:35:12.826 "params": { 00:35:12.826 "small_cache_size": 128, 00:35:12.826 "large_cache_size": 16, 00:35:12.826 "task_count": 2048, 00:35:12.826 "sequence_count": 2048, 00:35:12.826 "buf_count": 2048 00:35:12.826 } 00:35:12.826 } 00:35:12.826 ] 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "subsystem": "bdev", 00:35:12.826 "config": [ 00:35:12.826 { 00:35:12.826 "method": "bdev_set_options", 00:35:12.826 "params": { 00:35:12.826 "bdev_io_pool_size": 65535, 00:35:12.826 "bdev_io_cache_size": 256, 00:35:12.826 "bdev_auto_examine": true, 00:35:12.826 "iobuf_small_cache_size": 128, 00:35:12.826 "iobuf_large_cache_size": 16 00:35:12.826 } 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "method": "bdev_raid_set_options", 00:35:12.826 "params": { 00:35:12.826 "process_window_size_kb": 1024, 00:35:12.826 "process_max_bandwidth_mb_sec": 0 00:35:12.826 } 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "method": "bdev_iscsi_set_options", 00:35:12.826 "params": { 00:35:12.826 "timeout_sec": 30 00:35:12.826 } 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "method": "bdev_nvme_set_options", 00:35:12.826 "params": { 00:35:12.826 "action_on_timeout": "none", 00:35:12.826 "timeout_us": 0, 00:35:12.826 "timeout_admin_us": 0, 00:35:12.826 "keep_alive_timeout_ms": 10000, 00:35:12.826 "arbitration_burst": 0, 00:35:12.826 "low_priority_weight": 0, 00:35:12.826 "medium_priority_weight": 0, 00:35:12.826 "high_priority_weight": 0, 00:35:12.826 "nvme_adminq_poll_period_us": 10000, 00:35:12.826 "nvme_ioq_poll_period_us": 0, 00:35:12.826 "io_queue_requests": 512, 00:35:12.826 "delay_cmd_submit": true, 00:35:12.826 "transport_retry_count": 4, 00:35:12.826 "bdev_retry_count": 3, 00:35:12.826 "transport_ack_timeout": 0, 00:35:12.826 "ctrlr_loss_timeout_sec": 0, 00:35:12.826 "reconnect_delay_sec": 0, 00:35:12.826 "fast_io_fail_timeout_sec": 0, 00:35:12.826 "disable_auto_failback": false, 00:35:12.826 "generate_uuids": false, 00:35:12.826 "transport_tos": 0, 00:35:12.826 "nvme_error_stat": false, 00:35:12.826 "rdma_srq_size": 0, 00:35:12.826 "io_path_stat": false, 00:35:12.826 "allow_accel_sequence": false, 00:35:12.826 "rdma_max_cq_size": 0, 00:35:12.826 "rdma_cm_event_timeout_ms": 0, 00:35:12.826 "dhchap_digests": [ 00:35:12.826 "sha256", 00:35:12.826 "sha384", 00:35:12.826 "sha512" 00:35:12.826 ], 00:35:12.826 "dhchap_dhgroups": [ 00:35:12.826 "null", 00:35:12.826 "ffdhe2048", 00:35:12.826 "ffdhe3072", 00:35:12.826 "ffdhe4096", 00:35:12.826 "ffdhe6144", 00:35:12.826 "ffdhe8192" 00:35:12.826 ] 00:35:12.826 } 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "method": "bdev_nvme_attach_controller", 00:35:12.826 "params": { 00:35:12.826 "name": "nvme0", 00:35:12.826 "trtype": "TCP", 00:35:12.826 "adrfam": "IPv4", 00:35:12.826 "traddr": "127.0.0.1", 00:35:12.826 "trsvcid": "4420", 00:35:12.826 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:12.826 "prchk_reftag": false, 00:35:12.826 "prchk_guard": false, 00:35:12.826 "ctrlr_loss_timeout_sec": 0, 00:35:12.826 "reconnect_delay_sec": 0, 00:35:12.826 "fast_io_fail_timeout_sec": 0, 00:35:12.826 "psk": "key0", 00:35:12.826 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:12.826 "hdgst": false, 00:35:12.826 "ddgst": false, 00:35:12.826 "multipath": "multipath" 00:35:12.826 } 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "method": "bdev_nvme_set_hotplug", 00:35:12.826 "params": { 00:35:12.826 "period_us": 100000, 00:35:12.826 "enable": false 00:35:12.826 } 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "method": "bdev_wait_for_examine" 00:35:12.826 } 00:35:12.826 
] 00:35:12.826 }, 00:35:12.826 { 00:35:12.826 "subsystem": "nbd", 00:35:12.826 "config": [] 00:35:12.826 } 00:35:12.826 ] 00:35:12.826 }' 00:35:12.826 14:45:01 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:12.827 14:45:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:12.827 [2024-11-17 14:45:01.911613] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:35:12.827 [2024-11-17 14:45:01.911664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744253 ] 00:35:12.827 [2024-11-17 14:45:01.986207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.827 [2024-11-17 14:45:02.028680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.085 [2024-11-17 14:45:02.189904] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:13.652 14:45:02 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:13.652 14:45:02 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:13.652 14:45:02 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:13.652 14:45:02 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:13.652 14:45:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.911 14:45:02 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:13.911 14:45:02 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:13.911 14:45:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:13.911 14:45:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:13.911 14:45:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.911 14:45:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.911 14:45:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:14.171 14:45:03 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:14.171 14:45:03 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:14.171 14:45:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:14.171 14:45:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.171 14:45:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.171 14:45:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.171 14:45:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:14.171 14:45:03 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:14.171 14:45:03 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:14.171 14:45:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:14.171 14:45:03 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:14.430 14:45:03 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:14.430 14:45:03 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:14.430 14:45:03 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.In5LjzTq8y /tmp/tmp.FiSa289Ovy 00:35:14.430 14:45:03 keyring_file -- keyring/file.sh@20 -- # killprocess 1744253 00:35:14.430 14:45:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1744253 ']' 00:35:14.430 14:45:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1744253 00:35:14.430 14:45:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:14.430 14:45:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.430 14:45:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744253 00:35:14.430 14:45:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:14.430 14:45:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:14.430 14:45:03 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744253' 00:35:14.430 killing process with pid 1744253 00:35:14.430 14:45:03 keyring_file -- common/autotest_common.sh@973 -- # kill 1744253 00:35:14.430 Received shutdown signal, test time was about 1.000000 seconds 00:35:14.430 00:35:14.430 Latency(us) 00:35:14.430 [2024-11-17T13:45:03.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.430 [2024-11-17T13:45:03.655Z] =================================================================================================================== 00:35:14.430 [2024-11-17T13:45:03.655Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:14.430 14:45:03 keyring_file -- common/autotest_common.sh@978 -- # wait 1744253 00:35:14.690 14:45:03 keyring_file -- keyring/file.sh@21 -- # killprocess 1742603 00:35:14.690 14:45:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1742603 ']' 00:35:14.690 14:45:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1742603 00:35:14.690 14:45:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:14.690 14:45:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.690 14:45:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1742603 00:35:14.690 14:45:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:14.690 14:45:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:14.690 14:45:03 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1742603' 00:35:14.690 killing process with pid 1742603 00:35:14.690 14:45:03 keyring_file -- common/autotest_common.sh@973 -- # kill 1742603 00:35:14.690 14:45:03 keyring_file -- common/autotest_common.sh@978 -- # wait 1742603 00:35:14.949 00:35:14.949 real 0m11.823s 00:35:14.949 user 0m29.398s 00:35:14.949 sys 0m2.671s 00:35:14.949 14:45:04 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.949 14:45:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:14.949 ************************************ 00:35:14.949 END TEST keyring_file 00:35:14.949 ************************************ 00:35:14.949 14:45:04 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:14.949 14:45:04 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:14.949 14:45:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:14.949 14:45:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:14.949 14:45:04 -- 
common/autotest_common.sh@10 -- # set +x 00:35:14.949 ************************************ 00:35:14.949 START TEST keyring_linux 00:35:14.949 ************************************ 00:35:14.949 14:45:04 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:14.949 Joined session keyring: 691100850 00:35:15.209 * Looking for test storage... 00:35:15.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:15.209 14:45:04 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:15.209 14:45:04 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:15.209 14:45:04 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:15.209 14:45:04 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:15.209 14:45:04 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:15.209 14:45:04 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:15.209 14:45:04 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:15.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.209 --rc genhtml_branch_coverage=1 00:35:15.209 --rc genhtml_function_coverage=1 00:35:15.209 --rc genhtml_legend=1 00:35:15.209 --rc geninfo_all_blocks=1 00:35:15.209 --rc geninfo_unexecuted_blocks=1 00:35:15.209 00:35:15.209 ' 00:35:15.209 14:45:04 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:15.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.209 --rc genhtml_branch_coverage=1 00:35:15.209 --rc genhtml_function_coverage=1 00:35:15.209 --rc genhtml_legend=1 00:35:15.209 --rc geninfo_all_blocks=1 00:35:15.209 --rc geninfo_unexecuted_blocks=1 00:35:15.209 00:35:15.209 ' 00:35:15.209 14:45:04 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:15.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.209 --rc genhtml_branch_coverage=1 00:35:15.209 --rc genhtml_function_coverage=1 00:35:15.209 --rc genhtml_legend=1 00:35:15.209 --rc geninfo_all_blocks=1 00:35:15.209 --rc geninfo_unexecuted_blocks=1 00:35:15.209 00:35:15.209 ' 00:35:15.209 14:45:04 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:15.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.209 --rc genhtml_branch_coverage=1 00:35:15.209 --rc genhtml_function_coverage=1 00:35:15.209 --rc genhtml_legend=1 00:35:15.209 --rc geninfo_all_blocks=1 00:35:15.209 --rc geninfo_unexecuted_blocks=1 00:35:15.209 00:35:15.209 ' 00:35:15.209 14:45:04 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:15.209 14:45:04 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:15.209 14:45:04 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:15.210 14:45:04 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:15.210 14:45:04 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.210 14:45:04 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.210 14:45:04 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.210 14:45:04 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.210 14:45:04 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.210 14:45:04 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.210 14:45:04 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:15.210 14:45:04 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:15.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:15.210 14:45:04 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:15.210 14:45:04 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:15.210 14:45:04 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:15.210 14:45:04 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:15.210 14:45:04 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:15.210 14:45:04 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:15.210 /tmp/:spdk-test:key0 00:35:15.210 14:45:04 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:15.210 14:45:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:15.210 
14:45:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:15.210 14:45:04 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:15.469 14:45:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:15.469 14:45:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:15.469 /tmp/:spdk-test:key1 00:35:15.469 14:45:04 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1744805 00:35:15.469 14:45:04 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1744805 00:35:15.469 14:45:04 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:15.469 14:45:04 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1744805 ']' 00:35:15.469 14:45:04 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.469 14:45:04 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.469 14:45:04 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.469 14:45:04 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.469 14:45:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:15.469 [2024-11-17 14:45:04.505909] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
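prep_key above writes each PSK to a mode-0600 file in the NVMe TLS interchange format: the NVMeTLSkey-1 prefix, a two-hex-digit digest field (0 here, i.e. no hash), and a base64 payload, all colon-delimited. (The earlier "[: : integer expression expected" complaint is benign; an unset test flag expands to an empty string inside a numeric test in nvmf/common.sh.) A minimal sketch of what the embedded python one-liner appears to compute, assuming it appends a little-endian CRC-32 of the key bytes, treated as plain ASCII, before base64-encoding:

    format_interchange_psk_sketch() {
        # assumption: approximates the format_key helper invoked via nvmf/common.sh@743 above
        local key=$1 digest=$2
        python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" "$digest"
    }
    # format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0
    # should, if the CRC guess is right, print the exact string registered below:
    #   NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
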
00:35:15.469 [2024-11-17 14:45:04.505959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744805 ] 00:35:15.469 [2024-11-17 14:45:04.582229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.469 [2024-11-17 14:45:04.624870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.728 14:45:04 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.728 14:45:04 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:15.728 14:45:04 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:15.728 14:45:04 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.728 14:45:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:15.728 [2024-11-17 14:45:04.834777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:15.728 null0 00:35:15.728 [2024-11-17 14:45:04.866839] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:15.728 [2024-11-17 14:45:04.867193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:15.728 14:45:04 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.728 14:45:04 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:15.728 963517540 00:35:15.728 14:45:04 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:15.728 937505701 00:35:15.728 14:45:04 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1744816 00:35:15.728 14:45:04 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1744816 /var/tmp/bperf.sock 00:35:15.728 14:45:04 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:15.728 14:45:04 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1744816 ']' 00:35:15.728 14:45:04 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:15.728 14:45:04 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.728 14:45:04 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:15.728 14:45:04 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.728 14:45:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:15.728 [2024-11-17 14:45:04.941933] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
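With the target listening on 127.0.0.1:4420, both PSK files are registered as "user" keys on the session keyring (the trailing @s), and keyctl answers with the serial numbers, 963517540 for key0 and 937505701 for key1, that the test resolves and unlinks later in the run. The round-trip the linux.sh helpers perform for one key condenses to the following (name and path exactly as in this run):

    name=:spdk-test:key0
    sn=$(keyctl add user "$name" "$(cat /tmp/:spdk-test:key0)" @s)  # register; prints the serial
    keyctl search @s user "$name"   # name -> serial lookup, i.e. what get_keysn does below
    keyctl print "$sn"              # dump the payload, compared against the PSK string
    keyctl unlink "$sn"             # cleanup; the log later reports "1 links removed"

Once bdevperf is up, check_keys cross-checks the same serial over the RPC socket: keyring_get_keys piped through jq length for the key count, and select(.name == ":spdk-test:key0") | .sn for the serial, which must match the keyctl search result.
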
00:35:15.728 [2024-11-17 14:45:04.941978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744816 ] 00:35:15.988 [2024-11-17 14:45:05.017553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.988 [2024-11-17 14:45:05.059811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.988 14:45:05 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.988 14:45:05 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:15.988 14:45:05 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:15.988 14:45:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:16.248 14:45:05 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:16.248 14:45:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:16.508 14:45:05 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:16.508 14:45:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:16.767 [2024-11-17 14:45:05.735861] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:16.767 nvme0n1 00:35:16.767 14:45:05 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:16.767 14:45:05 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:16.767 14:45:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:16.767 14:45:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:16.767 14:45:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:16.767 14:45:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.026 14:45:06 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:17.026 14:45:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:17.026 14:45:06 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:17.026 14:45:06 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:17.026 14:45:06 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:17.026 14:45:06 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:17.026 14:45:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.026 14:45:06 keyring_linux -- keyring/linux.sh@25 -- # sn=963517540 00:35:17.026 14:45:06 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:17.026 14:45:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:17.026 14:45:06 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 963517540 == \9\6\3\5\1\7\5\4\0 ]] 00:35:17.026 14:45:06 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 963517540 00:35:17.026 14:45:06 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:17.026 14:45:06 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:17.290 Running I/O for 1 seconds... 00:35:18.229 21184.00 IOPS, 82.75 MiB/s 00:35:18.229 Latency(us) 00:35:18.229 [2024-11-17T13:45:07.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.229 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:18.229 nvme0n1 : 1.01 21183.02 82.75 0.00 0.00 6022.16 1966.08 7351.43 00:35:18.229 [2024-11-17T13:45:07.454Z] =================================================================================================================== 00:35:18.230 [2024-11-17T13:45:07.455Z] Total : 21183.02 82.75 0.00 0.00 6022.16 1966.08 7351.43 00:35:18.230 { 00:35:18.230 "results": [ 00:35:18.230 { 00:35:18.230 "job": "nvme0n1", 00:35:18.230 "core_mask": "0x2", 00:35:18.230 "workload": "randread", 00:35:18.230 "status": "finished", 00:35:18.230 "queue_depth": 128, 00:35:18.230 "io_size": 4096, 00:35:18.230 "runtime": 1.006089, 00:35:18.230 "iops": 21183.016611850442, 00:35:18.230 "mibps": 82.74615864004079, 00:35:18.230 "io_failed": 0, 00:35:18.230 "io_timeout": 0, 00:35:18.230 "avg_latency_us": 6022.161514558036, 00:35:18.230 "min_latency_us": 1966.08, 00:35:18.230 "max_latency_us": 7351.429565217391 00:35:18.230 } 00:35:18.230 ], 00:35:18.230 "core_count": 1 00:35:18.230 } 00:35:18.230 14:45:07 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:18.230 14:45:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:18.488 14:45:07 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:18.488 14:45:07 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:18.488 14:45:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:18.488 14:45:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:18.488 14:45:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:18.488 14:45:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.747 14:45:07 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:18.747 14:45:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:18.747 14:45:07 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:18.747 14:45:07 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:18.747 14:45:07 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:18.747 14:45:07 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
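The valid_exec_arg call just logged is the entry point of the NOT assertion wrapper; the xtrace that continues below walks through its bookkeeping (argument type check, dispatch, exit-status capture). Stripped of that, the helper reduces to roughly this sketch (simplified; judging by the es > 128 and [[ -n '' ]] checks further down, the real helper also filters signal deaths and an optional expected-output match):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command and capture its exit status
        (( es != 0 ))    # the assertion passes only if the command failed
    }
    # as used here: NOT bperf_cmd bdev_nvme_attach_controller ... --psk :spdk-test:key1
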
00:35:18.747 14:45:07 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:18.747 14:45:07 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.747 14:45:07 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:18.747 14:45:07 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.747 14:45:07 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:18.747 14:45:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:18.747 [2024-11-17 14:45:07.925820] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:18.747 [2024-11-17 14:45:07.925860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212ba70 (107): Transport endpoint is not connected 00:35:18.747 [2024-11-17 14:45:07.926855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212ba70 (9): Bad file descriptor 00:35:18.747 [2024-11-17 14:45:07.927857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:18.748 [2024-11-17 14:45:07.927866] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:18.748 [2024-11-17 14:45:07.927873] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:18.748 [2024-11-17 14:45:07.927882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
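The handshake failure above (errno 107 on the recv path, then a dead file descriptor on the qpair) is the expected outcome: :spdk-test:key1 exists on the kernel session keyring, but it is not a PSK the target was set up to accept, so the connection is torn down during initialization. The failing call can be reproduced by hand over the bperf socket, exactly as the wrapper issued it; SPDK echoes the request and the code -5 (Input/output error) response next:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
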
00:35:18.748 request: 00:35:18.748 { 00:35:18.748 "name": "nvme0", 00:35:18.748 "trtype": "tcp", 00:35:18.748 "traddr": "127.0.0.1", 00:35:18.748 "adrfam": "ipv4", 00:35:18.748 "trsvcid": "4420", 00:35:18.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:18.748 "prchk_reftag": false, 00:35:18.748 "prchk_guard": false, 00:35:18.748 "hdgst": false, 00:35:18.748 "ddgst": false, 00:35:18.748 "psk": ":spdk-test:key1", 00:35:18.748 "allow_unrecognized_csi": false, 00:35:18.748 "method": "bdev_nvme_attach_controller", 00:35:18.748 "req_id": 1 00:35:18.748 } 00:35:18.748 Got JSON-RPC error response 00:35:18.748 response: 00:35:18.748 { 00:35:18.748 "code": -5, 00:35:18.748 "message": "Input/output error" 00:35:18.748 } 00:35:18.748 14:45:07 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:18.748 14:45:07 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:18.748 14:45:07 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:18.748 14:45:07 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@33 -- # sn=963517540 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 963517540 00:35:18.748 1 links removed 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@33 -- # sn=937505701 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 937505701 00:35:18.748 1 links removed 00:35:18.748 14:45:07 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1744816 00:35:18.748 14:45:07 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1744816 ']' 00:35:18.748 14:45:07 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1744816 00:35:18.748 14:45:07 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:18.748 14:45:07 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.748 14:45:07 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744816 00:35:19.007 14:45:08 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:19.007 14:45:08 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:19.007 14:45:08 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744816' 00:35:19.007 killing process with pid 1744816 00:35:19.007 14:45:08 keyring_linux -- common/autotest_common.sh@973 -- # kill 1744816 00:35:19.007 Received shutdown signal, test time was about 1.000000 seconds 00:35:19.007 00:35:19.008 
Latency(us) 00:35:19.008 [2024-11-17T13:45:08.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.008 [2024-11-17T13:45:08.233Z] =================================================================================================================== 00:35:19.008 [2024-11-17T13:45:08.233Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:19.008 14:45:08 keyring_linux -- common/autotest_common.sh@978 -- # wait 1744816 00:35:19.008 14:45:08 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1744805 00:35:19.008 14:45:08 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1744805 ']' 00:35:19.008 14:45:08 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1744805 00:35:19.008 14:45:08 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:19.008 14:45:08 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.008 14:45:08 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744805 00:35:19.008 14:45:08 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:19.008 14:45:08 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:19.008 14:45:08 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744805' 00:35:19.008 killing process with pid 1744805 00:35:19.008 14:45:08 keyring_linux -- common/autotest_common.sh@973 -- # kill 1744805 00:35:19.008 14:45:08 keyring_linux -- common/autotest_common.sh@978 -- # wait 1744805 00:35:19.577 00:35:19.577 real 0m4.361s 00:35:19.577 user 0m8.283s 00:35:19.577 sys 0m1.413s 00:35:19.577 14:45:08 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:19.577 14:45:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:19.577 ************************************ 00:35:19.577 END TEST keyring_linux 00:35:19.577 ************************************ 00:35:19.577 14:45:08 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:19.577 14:45:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:19.577 14:45:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:19.577 14:45:08 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:19.577 14:45:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:19.577 14:45:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:19.577 14:45:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:19.577 14:45:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:19.577 14:45:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:19.577 14:45:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:19.577 14:45:08 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:19.577 14:45:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:19.577 14:45:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:19.577 14:45:08 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:19.577 14:45:08 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:19.577 14:45:08 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:19.577 14:45:08 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:19.577 14:45:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.577 14:45:08 -- common/autotest_common.sh@10 -- # set +x 00:35:19.577 14:45:08 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:19.577 14:45:08 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:19.577 14:45:08 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:19.577 14:45:08 -- common/autotest_common.sh@10 -- # set +x 00:35:24.857 INFO: APP EXITING 
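Teardown for both keyring suites went through the same killprocess helper, whose uname/ps/kill/wait xtrace repeats above for pids 1744253, 1742603, 1744816 and 1744805. A simplified reconstruction (assumption: the real helper branches for FreeBSD and for sudo-owned processes, which is what the uname and comm= probes are for):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                  # nothing to do if it is already gone
        local comm
        comm=$(ps --no-headers -o comm= "$pid")     # reactor_0 / reactor_1 in this run
        echo "killing process with pid $pid"
        kill "$pid"                                 # plain SIGTERM unless it runs as sudo
        wait "$pid" || true                         # reap; SPDK apps print shutdown stats here
    }
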
00:35:24.857 INFO: killing all VMs 00:35:24.857 INFO: killing vhost app 00:35:24.857 INFO: EXIT DONE 00:35:27.396 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:27.396 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:27.396 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:30.690 Cleaning 00:35:30.690 Removing: /var/run/dpdk/spdk0/config 00:35:30.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:30.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:30.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:30.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:30.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:30.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:30.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:30.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:30.690 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:30.690 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:30.690 Removing: /var/run/dpdk/spdk1/config 00:35:30.690 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:30.690 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:30.691 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:30.691 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:30.691 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:30.691 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:30.691 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:30.691 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:30.691 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:30.691 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:30.691 Removing: /var/run/dpdk/spdk2/config 00:35:30.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:30.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:30.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:30.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:30.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:30.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:30.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:30.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:30.691 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:30.691 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:30.691 Removing: /var/run/dpdk/spdk3/config 00:35:30.691 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:30.691 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:30.691 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:30.691 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:30.691 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:30.691 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:30.691 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:30.691 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:30.691 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:30.691 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:30.691 Removing: /var/run/dpdk/spdk4/config 00:35:30.691 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:30.691 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:30.691 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:30.691 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:30.691 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:30.691 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:30.691 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:30.691 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:30.691 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:30.691 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:30.691 Removing: /dev/shm/bdev_svc_trace.1 00:35:30.691 Removing: /dev/shm/nvmf_trace.0 00:35:30.691 Removing: /dev/shm/spdk_tgt_trace.pid1265948 00:35:30.691 Removing: /var/run/dpdk/spdk0 00:35:30.691 Removing: /var/run/dpdk/spdk1 00:35:30.691 Removing: /var/run/dpdk/spdk2 00:35:30.691 Removing: /var/run/dpdk/spdk3 00:35:30.691 Removing: /var/run/dpdk/spdk4 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1070586 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1263809 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1264867 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1265948 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1266589 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1267533 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1267558 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1268541 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1268755 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1268988 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1270632 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1271909 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1272208 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1272495 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1272799 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1273089 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1273339 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1273595 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1273876 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1274621 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1277626 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1277884 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1278138 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1278141 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1278638 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1278645 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1279140 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1279146 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1279429 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1279633 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1279768 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1279895 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1280356 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1280523 00:35:30.691 Removing: 
/var/run/dpdk/spdk_pid1280872 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1284722 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1289123 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1299661 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1300230 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1304514 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1304959 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1309247 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1315052 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1317737 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1327949 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1336975 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1339239 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1340161 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1357049 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1361115 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1406626 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1412024 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1417794 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1424290 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1424292 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1425219 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1426111 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1426848 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1427510 00:35:30.691 Removing: /var/run/dpdk/spdk_pid1427513 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1427751 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1427888 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1427981 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1428807 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1429594 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1430508 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1431088 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1431196 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1431426 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1432988 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1434157 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1442261 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1471625 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1476143 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1477743 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1479586 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1479610 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1479838 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1480042 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1480483 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1482194 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1483105 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1483484 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1485776 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1486130 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1486770 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1491037 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1496431 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1496432 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1496433 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1500234 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1508564 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1513109 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1519186 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1520409 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1521953 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1523276 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1527978 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1532312 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1536341 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1543934 00:35:30.951 Removing: 
/var/run/dpdk/spdk_pid1543940 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1548661 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1548889 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1549120 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1549576 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1549583 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1554067 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1554656 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1559002 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1562240 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1567451 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1572780 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1581567 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1588551 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1588559 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1607378 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1608081 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1608636 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1609640 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1610240 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1610888 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1611405 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1611876 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1616126 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1616360 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1622410 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1622481 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1627946 00:35:30.951 Removing: /var/run/dpdk/spdk_pid1632122 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1641777 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1642465 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1646715 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1646970 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1651225 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1657382 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1659954 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1670119 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1678802 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1680591 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1681388 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1697464 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1701386 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1704687 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1712443 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1712449 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1717478 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1719444 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1721406 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1722452 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1724428 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1725592 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1734293 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1734899 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1735358 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1737624 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1738091 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1738617 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1742603 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1742607 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1744253 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1744805 00:35:31.210 Removing: /var/run/dpdk/spdk_pid1744816 00:35:31.210 Removing: /var/run/dpdk/spdk_pid758034 00:35:31.210 Clean 00:35:31.210 14:45:20 -- common/autotest_common.sh@1453 -- # return 0 00:35:31.210 14:45:20 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:31.210 14:45:20 -- common/autotest_common.sh@732 -- # xtrace_disable 
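The Clean pass above drops the per-instance DPDK runtime state for all five SPDK process slots (spdk0 through spdk4: the EAL config, the fbarray memseg/memzone files and hugepage_info), the shared-memory trace files, and the per-pid file-prefix entries accumulated across the whole run. As a rough equivalent (paths as listed; treating the spdk_pid entries as directories, which --file-prefix creates, is an assumption):

    for d in /var/run/dpdk/spdk{0..4}; do
        rm -f "$d"/config "$d"/fbarray_mem* "$d"/hugepage_info   # EAL runtime files
    done
    rm -f /dev/shm/bdev_svc_trace.1 /dev/shm/nvmf_trace.0 /dev/shm/spdk_tgt_trace.pid*
    rm -rf /var/run/dpdk/spdk_pid*                               # per-pid --file-prefix state
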
00:35:31.210 14:45:20 -- common/autotest_common.sh@10 -- # set +x 00:35:31.210 14:45:20 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:35:31.210 14:45:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:31.210 14:45:20 -- common/autotest_common.sh@10 -- # set +x 00:35:31.469 14:45:20 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:31.469 14:45:20 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:31.469 14:45:20 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:31.469 14:45:20 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:31.469 14:45:20 -- spdk/autotest.sh@398 -- # hostname 00:35:31.469 14:45:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:31.469 geninfo: WARNING: invalid characters removed from testname! 00:35:53.531 14:45:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:54.913 14:45:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:56.819 14:45:46 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:58.725 14:45:47 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:00.632 14:45:49 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 
00:36:02.539 14:45:51 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:04.479 14:45:53 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:04.479 14:45:53 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:04.479 14:45:53 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:36:04.479 14:45:53 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:04.479 14:45:53 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:04.479 14:45:53 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:04.479 + [[ -n 1186748 ]] 00:36:04.479 + sudo kill 1186748 00:36:04.490 [Pipeline] } 00:36:04.505 [Pipeline] // stage 00:36:04.510 [Pipeline] } 00:36:04.525 [Pipeline] // timeout 00:36:04.530 [Pipeline] } 00:36:04.544 [Pipeline] // catchError 00:36:04.550 [Pipeline] } 00:36:04.565 [Pipeline] // wrap 00:36:04.571 [Pipeline] } 00:36:04.585 [Pipeline] // catchError 00:36:04.594 [Pipeline] stage 00:36:04.597 [Pipeline] { (Epilogue) 00:36:04.610 [Pipeline] catchError 00:36:04.612 [Pipeline] { 00:36:04.625 [Pipeline] echo 00:36:04.627 Cleanup processes 00:36:04.634 [Pipeline] sh 00:36:04.923 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:04.923 1755910 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:04.938 [Pipeline] sh 00:36:05.229 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:05.229 ++ grep -v 'sudo pgrep' 00:36:05.229 ++ awk '{print $1}' 00:36:05.229 + sudo kill -9 00:36:05.229 + true 00:36:05.241 [Pipeline] sh 00:36:05.527 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:17.753 [Pipeline] sh 00:36:18.040 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:18.040 Artifacts sizes are good 00:36:18.055 [Pipeline] archiveArtifacts 00:36:18.062 Archiving artifacts 00:36:18.202 [Pipeline] sh 00:36:18.488 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:18.502 [Pipeline] cleanWs 00:36:18.512 [WS-CLEANUP] Deleting project workspace... 00:36:18.512 [WS-CLEANUP] Deferred wipeout is used... 00:36:18.519 [WS-CLEANUP] done 00:36:18.521 [Pipeline] } 00:36:18.537 [Pipeline] // catchError 00:36:18.550 [Pipeline] sh 00:36:18.862 + logger -p user.info -t JENKINS-CI 00:36:18.898 [Pipeline] } 00:36:18.911 [Pipeline] // stage 00:36:18.916 [Pipeline] } 00:36:18.930 [Pipeline] // node 00:36:18.935 [Pipeline] End of Pipeline 00:36:18.973 Finished: SUCCESS